Okay, so I'm trying to pull some information from a page using an opener and Beautiful Soup, and I think that's where the problem comes from. I need to use an opener because I have to route the requests through Tor, since I think the site blocks repeated requests.
(If this all shows up unformatted, I'll edit it right away - weird things tend to happen.)
Here's the code:
def getsite():
    proxy = urllib2.ProxyHandler({"http" : "127.0.0.1:8118"})
    opener = urllib2.build_opener(proxy)
    opener.addheaders = [('User-agent', 'Mozilla/5.0')]
    url = opener.open('https://www.website.com')
    try:
        page = BeautifulSoup(urllib2.urlopen(url).read())
    except Exception as Err:
        errorlist.append('Unexpected Error ' + str(Err))
        time.sleep(60)
        page = BeautifulSoup(urllib2.urlopen(url).read())
    values = page.findAll("strong")
    high = values[2]
    low = values[1]
    last = values[0]
    vol = values[3]
    high = str(high)
    low = str(low)
    last = str(last)
    vol = str(vol)
    high = high[8:-13]
    low = low[8:-13]
    last = last[8:-13]
    vol = vol[8:-24]
    print high, low, last, vol

while True:
    getsite()
    time.sleep(3200)
It raises this error:
    page = BeautifulSoup(urllib2.urlopen(url).read())
  File "C:\Python27\lib\urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "C:\Python27\lib\urllib2.py", line 392, in open
    protocol = req.get_type()
AttributeError: addinfourl instance has no attribute 'get_type'
Answer (score: 6):
It looks like you're using the object returned by your opener as if it were a URL:
page = BeautifulSoup(urllib2.urlopen(url).read())
where url is actually the response object already returned by opener.open(), not a URL string, so passing it back into urllib2.urlopen() fails. Instead, do:
page = BeautifulSoup(url.read())
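For reference, here is a minimal sketch of how the whole function might look with that fix applied. It keeps the proxy setup and retry logic from the question; the proxy port, page URL, and the choice of which BeautifulSoup import to use are taken from or assumed from the question, and swapping the fixed-offset string slicing for the tags' .string attribute is my assumption about what those <strong> tags contain, not something the answer requires.

import time
import urllib2
from BeautifulSoup import BeautifulSoup  # assumption: BeautifulSoup 3; for bs4 use "from bs4 import BeautifulSoup"

errorlist = []

def getsite():
    # Route requests through the local Privoxy/Tor proxy, as in the question.
    proxy = urllib2.ProxyHandler({"http": "127.0.0.1:8118"})
    opener = urllib2.build_opener(proxy)
    opener.addheaders = [('User-agent', 'Mozilla/5.0')]

    try:
        # opener.open() already returns a response object; read it directly
        # instead of passing it back into urllib2.urlopen().
        response = opener.open('https://www.website.com')
        page = BeautifulSoup(response.read())
    except Exception as err:
        errorlist.append('Unexpected Error ' + str(err))
        time.sleep(60)
        response = opener.open('https://www.website.com')
        page = BeautifulSoup(response.read())

    values = page.findAll("strong")
    # Assumption: .string gives the text inside each <strong> tag, which avoids
    # the brittle str(tag)[8:-13] slicing from the original code.
    last = values[0].string
    low = values[1].string
    high = values[2].string
    vol = values[3].string
    print high, low, last, vol

The key point is simply that the opener replaces urllib2.urlopen for requests that must go through the proxy: open the URL once with opener.open(), then hand the response's .read() result to BeautifulSoup.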