Adding a user agent does not fix the "requests.exceptions.ConnectionError"

Time: 2019-08-30 06:57:54

Tags: python-requests python-3.7

I am trying to automatically download movies from the following website:

http://renrencili8.org/

Here is my code:

import requests, bs4, pyperclip, webbrowser
from urllib.parse import quote

movie = '复仇者联盟'
# movie name

url = 'http://renrencili8.org/query/' + quote(movie) + '/1-0-0/'
# URL of the search page for the movie

headers = {'user-agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.80 Safari/537.36'}
# add a header in response to the anti-crawler scheme

res = requests.get(url, headers=headers)
# download the search page of the movie

bssearch = bs4.BeautifulSoup(res.text, 'html.parser')
# parse the page

search_link = bssearch.select('.item dt sup a')
# select the link by the tags of the page

print(search_link)
# I expected this to print the selected link (select() actually returns a list of tags, not a dict)
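For what it is worth, quote() just percent-encodes the Chinese title as UTF-8, so the URL that actually gets requested should look like this (I checked this part separately and it seems fine):

from urllib.parse import quote

print(quote('复仇者联盟'))
# %E5%A4%8D%E4%BB%87%E8%80%85%E8%81%94%E7%9B%9F

print('http://renrencili8.org/query/' + quote('复仇者联盟') + '/1-0-0/')
# http://renrencili8.org/query/%E5%A4%8D%E4%BB%87%E8%80%85%E8%81%94%E7%9B%9F/1-0-0/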

But when I run it, the result is:


requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054,

I have already researched this error, and most of the suggested solutions say it is caused by the site's anti-crawler scheme and that a user agent header should be added.

I have already added that header to my code, but it does not solve the problem.
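For reference, this is the variant I was considering trying next, with retries and a fuller set of browser-like headers; everything besides the User-Agent value (the extra headers, the retry counts, the timeout) is just my own guess, not something the site documents:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
from urllib.parse import quote

movie = '复仇者联盟'
url = 'http://renrencili8.org/query/' + quote(movie) + '/1-0-0/'

# browser-like headers; everything besides User-Agent is my own guess
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.80 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
    'Connection': 'keep-alive',
}

session = requests.Session()
# retry a few times with backoff in case the server drops the first connection
session.mount('http://', HTTPAdapter(max_retries=Retry(total=3, backoff_factor=1)))

res = session.get(url, headers=headers, timeout=10)
print(res.status_code)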

Why does the error


requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054,

still occur, even though I have already added the user agent in the code below:

headers = {'user-agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.80 Safari/537.36'}
# add a header in response to the anti-crawler scheme

res = requests.get(url, headers=headers)
# download the search page of the movie

So how can I fix this?

0 Answers:

There are no answers yet.