Looking for ways to avoid getting banned while crawling

Time: 2018-07-17 12:52:26

Tags: python-3.x request instagram

I am making a lot of requests in Python to the page https://www.instagram.com/explore/tags/some_hashtag/?__a=1. Here is the code:

import random

import requests

def LoadUserAgents(uafile):
    """
    uafile : string
        path to text file of user agents, one per line
    """
    uas = []
    # collect one user agent per non-empty line
    # note: opening in 'rb' yields bytes user agents (hence the b'Mozilla...' in the logs below)
    with open(uafile, 'rb') as uaf:
        for ua in uaf.readlines():
            if ua:
                uas.append(ua.strip())
    random.shuffle(uas)
    return uas

address = f'https://www.instagram.com/explore/tags/{hashtag[1:]}/?__a=1'
uas = LoadUserAgents("user-agents.txt")
ua = random.choice(uas)
headers = {
    "Connection" : "close",  
    "User-Agent" : ua}

r = requests.get(address, proxies=proxy, timeout=30, headers=headers)

The text file "user-agents.txt" comes from here.

An example of the proxy variable is proxy={'http': 'http://104.196.45.252:80'}.

And I still regularly see in my logs that I get banned for a short time:

{'message': 'Please wait a few minutes before you try again.', 'status': 'fail'}

Immediately after such a ban I change the proxy and the user agent, but the following requests show that I am banned as well (a minimal sketch of this rotation loop appears after the log excerpt below).

[Crawler @ 17_07_2018_15h29m34s] 
Error message:{'message': 'Please wait a few minutes before you try again.', 'status': 'fail'} 
Proxy:{'http': 'http://104.196.45.252:80'}
Header: {'Connection': 'close', 'User-Agent': b'Mozilla/5.0 (Windows; U; Windows NT 5.0; fr; rv:1.8.1.9pre) Gecko/20071102 Firefox/2.0.0.9 Navigator/9.0.0.3'}

[Crawler @ 17_07_2018_15h29m44s]
Error message: {'message': 'Please wait a few minutes before you try again.', 'status': 'fail'} 
Proxy:{'http': 'http://52.77.242.220:80'} 
Header: {'Connection': 'close', 'User-Agent': b'Mozilla/5.0 (Windows; U; Windows NT 5.1; es-ES; rv:1.7.3) Gecko/20040910'}

....
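For reference, a minimal sketch of the rotate-and-retry behaviour described above might look like the following; the contents of proxies_list, the hashtag value, and the number of retries are assumptions and are not part of the original code.

# Sketch only: rotate proxy and user agent after a ban, then retry.
# `proxies_list` and `hashtag` are assumptions; they are not shown in the question.
import random
import requests

proxies_list = [
    {'http': 'http://104.196.45.252:80'},
    {'http': 'http://52.77.242.220:80'},
]
hashtag = '#some_hashtag'

uas = LoadUserAgents("user-agents.txt")
address = f'https://www.instagram.com/explore/tags/{hashtag[1:]}/?__a=1'

for attempt in range(5):
    proxy = random.choice(proxies_list)
    headers = {"Connection": "close", "User-Agent": random.choice(uas)}
    r = requests.get(address, proxies=proxy, timeout=30, headers=headers)
    if r.json().get('status') == 'fail':
        continue  # banned: switch to a different proxy / user agent and retry
    break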

Any ideas what I am doing wrong, or what I should add to avoid this problem?

Thanks!

1 answer:

Answer 0: (score: -2)

Try supplying a proxy for HTTPS traffic as well; at the moment the proxy you are providing is not being used at all.
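The point seems to be that the target URL is served over HTTPS, and requests only routes a call through a proxy whose key matches the URL scheme, so a proxies dict with only an 'http' entry is ignored for this request. A minimal sketch; whether this particular proxy endpoint actually accepts HTTPS traffic is an assumption:

# The URL starts with https://, so requests looks up the 'https' key;
# with only an 'http' key the request is sent directly, without any proxy.
proxy = {
    'http': 'http://104.196.45.252:80',
    'https': 'http://104.196.45.252:80',  # assumption: this endpoint also proxies HTTPS
}
r = requests.get(address, proxies=proxy, timeout=30, headers=headers)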