PYTHON: Submitting a query to an ASPX page and scraping the results

Date: 2014-03-06 10:24:02

Tags: python asp.net http

I want to scrape information about people from "http://www.ratsit.se/BC/SearchPerson.aspx", and I've written the following code:

import urllib
from bs4 import BeautifulSoup

headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Origin': 'http://www.ratsit.se',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17',
    'Content-Type': 'application/x-www-form-urlencoded',
    'Referer': 'http://www.ratsit.se/',
    'Accept-Encoding': 'gzip,deflate,sdch',
    'Accept-Language': 'en-US,en;q=0.8',
    'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3'
}

class MyOpener(urllib.FancyURLopener):
    version = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17'

myopener = MyOpener()
url = 'http://www.ratsit.se/BC/SearchPerson.aspx'
# first HTTP request without form data
f = myopener.open(url)
soup = BeautifulSoup(f)
# parse and retrieve two vital form values
viewstate = soup.select("#__VIEWSTATE")[0]['value']
#eventvalidation = soup.select("#__EVENTVALIDATION")[0]['value']

formData = (
    ('__LASTFOCUS', ''),
    ('__EVENTTARGET', ''),
    ('__EVENTARGUMENT', ''),
    #('__EVENTVALIDATION', eventvalidation),
    ('__VIEWSTATE', viewstate),
    ('ctl00$cphMain$txtFirstName', 'name'),
    ('ctl00$cphMain$txtLastName', ''),
    ('ctl00$cphMain$txtBirthDate', ''),    # etc. (not all listed)
    ('ctl00$cphMain$txtAddress', ''),
    ('ctl00$cphMain$txtZipCode', ''),
    ('ctl00$cphMain$txtCity', ''),
    ('ctl00$cphMain$txtKommun', ''),
    #('btnSearchAjax','Sök'),
)

encodedFields = urllib.urlencode(formData)
# second HTTP request with form data
f = myopener.open(url, encodedFields)

try:
    # actually we'd better use BeautifulSoup once again to
    # retrieve the results (instead of writing out the whole HTML file).
    # Besides, since the result is split into multiple pages,
    # we need to send more HTTP requests.
    fout = open('tmp.html', 'w')
except IOError:
    print('Could not open output file\n')
else:
    fout.writelines(f.readlines())
    fout.close()
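For comparison, the same two steps (pulling `__VIEWSTATE` out of the first response and URL-encoding the form) can be sketched with the Python 3 standard library alone, using `html.parser` in place of BeautifulSoup and `urllib.parse.urlencode` in place of the old `urllib.urlencode`. The HTML snippet and the VIEWSTATE value below are made-up stand-ins, not real server output:

```python
from html.parser import HTMLParser
from urllib.parse import urlencode

class HiddenFieldParser(HTMLParser):
    """Collects the values of hidden <input> fields such as __VIEWSTATE."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag == 'input':
            a = dict(attrs)
            if a.get('type') == 'hidden' and 'name' in a:
                self.fields[a['name']] = a.get('value', '')

# A minimal stand-in for the first GET response (illustrative only).
html = '<input type="hidden" name="__VIEWSTATE" value="dDwtMTA3" />'
parser = HiddenFieldParser()
parser.feed(html)

# Build and encode the POST body; '$' in ASP.NET control names
# gets percent-encoded as %24 by urlencode.
form_data = {
    '__VIEWSTATE': parser.fields['__VIEWSTATE'],
    'ctl00$cphMain$txtFirstName': 'name',
}
encoded = urlencode(form_data)
print(encoded)
```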

I get a response from the server saying my IP is blocked, but that can't be right, because it works when I use a browser... Any suggestions as to where I went wrong?

Thanks

1 answer:

Answer 0 (score: 0)

Your code doesn't work as posted:

  File "/Users/florianoswald/git/webscraper/scrape2.py", line 16
  version = 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.17 (KHTML, like Gecko)     Chrome/24.0.1312.57 Safari/537.17'
      ^
  IndentationError: expected an indented block

Shouldn't that be a class definition? And why do we need the MyOpener class at all? This works too:

myopener = urllib.FancyURLopener()
myopener.open("http://www.google.com")
<addinfourl at 4411860752 whose fp = <socket._fileobject object at 0x106ed1c50>>
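As an aside, in Python 3 `urllib.FancyURLopener` is deprecated, and a custom User-Agent can be set without subclassing anything by passing a headers dict to `urllib.request.Request`. A minimal sketch (this only builds the request object; nothing is sent over the network):

```python
import urllib.request

# The User-Agent string from the question; any browser-like value works.
ua = ('Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.17 '
      '(KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17')

# Attach the header directly to the Request instead of a custom opener class.
req = urllib.request.Request(
    'http://www.ratsit.se/BC/SearchPerson.aspx',
    headers={'User-Agent': ua},
)
# urllib normalizes header names to capitalized form, e.g. 'User-agent'.
```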