Urllib Bad Request Problem

Date: 2016-06-30 02:03:22

Tags: python beautifulsoup python-3.4 urllib bad-request

I tried every 'User-Agent' from here, but I still get urllib.error.HTTPError: HTTP Error 400: Bad Request. I also tried this, but then I got urllib.error.URLError: File Not Found. I don't know what else to do; my current code is:

from bs4 import BeautifulSoup
import urllib.request, json, ast

with open("urller.json") as f:
    cc = json.load(f)  # the file I get the links from; you can try this link instead:
    # cc = ../games/index.php?g_id=23521&game=0RBITALIS

for x in ast.literal_eval(cc):  # cc is a str(list), so it has to be converted first
    if x.startswith("../"):

        r = urllib.request.Request("http://www.game-debate.com{}".format(x[2:]),
                                   headers={'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11'})
        # x[2:] because I removed the '../' part from the urls

        rr = urllib.request.urlopen(r).read()
        soup = BeautifulSoup(rr, "html.parser")

        for y in soup.find_all("ul", attrs={'class': ['devDefSysReqList']}):
            print(y.text)

Edit: If you try only 1 link it may not show any error, because the error comes up on every 6th link.

1 Answer:

Answer 0 (score: 1)

A quick fix is to replace the spaces with +:

url = "http://www.game-debate.com"
r = urllib.request.Request(url + x[2:] ,headers={'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11'})
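For illustration, this is what the replacement does to one of these links; the game name with a space here is a hypothetical example, not taken from the real file:

x = "../games/index.php?g_id=23521&game=0RBITALIS 2"  # hypothetical link containing a space
url = "http://www.game-debate.com"
print(url + x[2:].replace(" ", "+"))
# http://www.game-debate.com/games/index.php?g_id=23521&game=0RBITALIS+2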

A better option would probably be to let urllib quote the parameters:

from bs4 import BeautifulSoup
import urllib.request, json, ast
from urllib.parse import quote, urljoin

with open("urller.json") as f:
    cc = json.load(f)  # the file I get the links from
    url = "http://www.game-debate.com"


    for x in ast.literal_eval(cc):  # cc is a str(list), so it has to be converted first
        if x.startswith("../"):
            r = urllib.request.Request(urljoin(url, quote(x.lstrip("."), safe="/?&=")), headers={
                'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11'})
            # safe="/?&=" keeps the URL delimiters intact; only the space becomes %20

            rr = urllib.request.urlopen(r).read()
            soup = BeautifulSoup(rr, "html.parser")
            print(rr.decode("utf-8"))

            for y in soup.find_all("ul", attrs={'class': ['devDefSysReqList']}):
                print(y.text)

Spaces are not valid in a URL; they need to be encoded as %20 or replaced with +.
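To see the two encodings side by side, a quick sketch using the standard library (the game name with a space is again a hypothetical example):

from urllib.parse import quote, quote_plus

print(quote("/games/index.php?g_id=23521&game=0RBITALIS 2", safe="/?&="))
# /games/index.php?g_id=23521&game=0RBITALIS%202  (space -> %20)
print(quote_plus("0RBITALIS 2"))
# 0RBITALIS+2  (space -> +)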