I'm trying to open multiple pages with urllib2. The problem is that some of the pages cannot be opened; the call raises urllib2.HTTPError: HTTP Error 400: Bad Request.
I got the hrefs for these pages from another web page (that page's head declares charset="utf-8"). The error is raised only when I try to open a page whose URL contains "č", "ž" or "ř".
Here is the code:
def getSoup(url):
    req = urllib2.Request(url)
    response = urllib2.urlopen(req)
    page = response.read()
    soup = BeautifulSoup(page, 'html.parser')
    return soup
hovienko = getSoup("http://www.hovno.cz/hovna-az/a/1/")
lis = hovienko.find("div", class_="span12").find('ul').findAll('li')

for liTag in lis:
    aTag = liTag.find('a')['href']
    href = "http://www.hovno.cz" + aTag  # hrefs I'm trying to open with urllib2
    soup = getSoup(href.encode("iso-8859-2"))  # error occurs here when 'č', 'ž' or 'ř' is in the URL
Does anyone know what I have to do to avoid this error?
Thanks
Answer 0 (score: 1)
The solution was very simple: I should have used urllib2.quote().
Edited code:
for liTag in lis:
    aTag = liTag.find('a')['href']
    href = "http://www.hovno.cz" + urllib2.quote(aTag.encode("utf-8"))
    soup = getSoup(href)
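For reference, the same fix on Python 3 lives in urllib.parse rather than urllib2. A minimal sketch (the path containing "č" is a made-up example in the spirit of the question):

```python
from urllib.parse import quote

def build_url(base, path):
    # Percent-encode non-ASCII characters in the path; "/" is kept literal
    # by quote()'s default safe characters, so the URL structure survives.
    return base + quote(path)

# "č" (U+010D) becomes the UTF-8 byte sequence %C4%8D in the final URL.
url = build_url("http://www.hovno.cz", "/hovna-az/č/1/")
```

quote() encodes to UTF-8 by default on Python 3, which matches the aTag.encode("utf-8") step in the accepted answer above.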
Answer 1 (score: 0)
There are a few things going on here.
First, your URIs cannot contain non-ASCII characters; you have to percent-encode them. See this: How to fetch a non-ascii url with Python urlopen?
Second, save yourself a world of pain and use requests for HTTP.
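One hedged way to do that replacement, assuming the URLs only need their path and query encoded: split the URL first so the scheme and host are left untouched, then hand the ASCII-safe result to requests.get() or urlopen().

```python
from urllib.parse import urlsplit, urlunsplit, quote

def to_ascii_url(url):
    # Percent-encode only the path and query; scheme and host stay as-is.
    # safe="/%" leaves slashes and any already-encoded %XX sequences alone.
    parts = urlsplit(url)
    return urlunsplit((
        parts.scheme,
        parts.netloc,
        quote(parts.path, safe="/%"),
        quote(parts.query, safe="=&%"),
        parts.fragment,
    ))

# "ž" (U+017E) is encoded as its UTF-8 bytes %C5%BE.
safe = to_ascii_url("http://www.hovno.cz/hovna-az/ž/1/")
```

This is a sketch, not a full IRI-to-URI conversion; internationalized hostnames would additionally need IDNA encoding.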
Answer 2 (score: 0)
This site is UTF-8, so why do you need href.encode("iso-8859-2")? I took the following code from http://programming-review.com/beautifulsoasome-interesting-python-functions/

import urllib2
import cgitb
cgitb.enable()
from BeautifulSoup import BeautifulSoup
from urlparse import urlparse

# print all links
def PrintLinks(localurl):
    data = urllib2.urlopen(localurl).read()
    print 'Encoding of fetched HTML : %s' % type(data)
    soup = BeautifulSoup(data)
    parse = urlparse(localurl)
    localurl = parse[0] + "://" + parse[1]
    print "<h3>Page links statistics</h3>"
    l = soup.findAll("a", attrs={"href": True})
    print "<h4>Total links count = " + str(len(l)) + '</h4>'
    externallinks = []  # external links list
    for link in l:
        # if it's an external link
        if link['href'].find("http://") == 0 and link['href'].find(localurl) == -1:
            externallinks = externallinks + [link]
    print "<h4>External links count = " + str(len(externallinks)) + '</h4>'
    if len(externallinks) > 0:
        print "<h3>External links list:</h3>"
        for link in externallinks:
            if link.text != '':
                print '<h5>' + link.text.encode('utf-8')
                print ' => [' + '<a href="' + link['href'] + '" >' + link['href'] + '</a>' + ']' + '</h5>'
            else:
                print '<h5>' + '[image]',
                print ' => [' + '<a href="' + link['href'] + '" >' + link['href'] + '</a>' + ']' + '</h5>'

PrintLinks("http://www.zlatestranky.cz/pro-mobily/")
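The external-link test in that snippet (an absolute http:// link that does not contain the local host) can be expressed more robustly with a host comparison via urlparse instead of substring searches. A small Python 3 sketch with illustrative links, not taken from the site:

```python
from urllib.parse import urlparse

def is_external(href, base_url):
    # A link is external when it is absolute (has a host) and that host
    # differs from the host of the page being scanned.
    base_host = urlparse(base_url).netloc
    host = urlparse(href).netloc
    return bool(host) and host != base_host

links = [
    "http://www.zlatestranky.cz/kontakty",  # same host -> internal
    "/pro-mobily/help",                     # relative -> internal
    "http://example.com/page",              # different host -> external
]
external = [h for h in links
            if is_external(h, "http://www.zlatestranky.cz/pro-mobily/")]
```

Unlike the find("http://") check above, this also classifies https:// and protocol-relative links correctly once their hosts are compared.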