I've built a web scraper in Python to tell me when the free bet offers on various betting sites change or when new ones are added.
However, these bookmakers tend to log information tied to IP traffic and MAC addresses in order to flag matched bettors.
How can I spoof my IP address when using the Request() method from the urllib.request module?
My code is as follows:
from urllib.request import Request, urlopen
import bs4

req = Request('https://www.888sport.com/online-sports-betting-promotions/', headers={'User-Agent': 'Mozilla/5.0'})
site = urlopen(req).read()
content = bs4.BeautifulSoup(site, 'html.parser')
Answer 0 (score: 0)
This can help you browse anonymously. You can use one of the free proxy-list sites to get a proxy and fill it into proxy = {}.
import requests
from bs4 import BeautifulSoup

url = ''
# Fill in a proxy from a free proxy list, e.g. "http://<ip>:<port>"
proxy = {"http": "http://", "https": "http://"}

session = requests.session()
response = session.get(url, headers={'User-Agent': 'Mozilla/5.0'}, proxies=proxy)
content = BeautifulSoup(response.text, 'html.parser')  # parse the response body, not the Response object
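For example, with an address taken from a free proxy list (the IP below is a documentation placeholder, not a working proxy; https://httpbin.org/ip simply echoes back the IP the server sees):

import requests

# Placeholder proxy from a free list -- substitute a real one
proxy = {'http': 'http://203.0.113.10:8080',
         'https': 'http://203.0.113.10:8080'}

# Check which IP the target site will see before scraping with it
check = requests.get('https://httpbin.org/ip',
                     headers={'User-Agent': 'Mozilla/5.0'},
                     proxies=proxy, timeout=10)
print(check.text)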
Answer 1 (score: 0)
I ran into the same problem a while ago. Here is the snippet I use to scrape anonymously.
from urllib.request import Request, urlopen
from fake_useragent import UserAgent
import random
from bs4 import BeautifulSoup
from IPython.core.display import clear_output

# Here I provide some proxies for not getting caught while scraping
ua = UserAgent()  # From here we generate a random user agent
proxies = []      # Will contain proxies [ip, port]

# Main function
def main():
    # Retrieve latest proxies
    proxies_req = Request('https://www.sslproxies.org/')
    proxies_req.add_header('User-Agent', ua.random)
    proxies_doc = urlopen(proxies_req).read().decode('utf8')

    soup = BeautifulSoup(proxies_doc, 'html.parser')
    proxies_table = soup.find(id='proxylisttable')

    # Save proxies in the array
    for row in proxies_table.tbody.find_all('tr'):
        proxies.append({
            'ip': row.find_all('td')[0].string,
            'port': row.find_all('td')[1].string
        })

    # Choose a random proxy
    proxy_index = random_proxy()
    proxy = proxies[proxy_index]

    for n in range(1, 20):
        req = Request('http://icanhazip.com')
        req.set_proxy(proxy['ip'] + ':' + proxy['port'], 'http')

        # Every 10 requests, generate a new proxy
        if n % 10 == 0:
            proxy_index = random_proxy()
            proxy = proxies[proxy_index]

        # Make the call
        try:
            my_ip = urlopen(req).read().decode('utf8')
            print('#' + str(n) + ': ' + my_ip)
            clear_output(wait=True)
        except Exception:  # If error, delete this proxy and find another one
            del proxies[proxy_index]
            print('Proxy ' + proxy['ip'] + ':' + proxy['port'] + ' deleted.')
            proxy_index = random_proxy()
            proxy = proxies[proxy_index]

# Retrieve a random index proxy (we need the index to delete it if not working)
def random_proxy():
    return random.randint(0, len(proxies) - 1)

if __name__ == '__main__':
    main()
This builds up a pool of working proxies. And this part:
user_agent_list = (
    # Chrome
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36',
    'Mozilla/5.0 (Windows NT 5.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.2; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36',
    'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36',
    'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36',
    # Internet Explorer
    'Mozilla/4.0 (compatible; MSIE 9.0; Windows NT 6.1)',
    'Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko',
    'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)',
    'Mozilla/5.0 (Windows NT 6.1; Trident/7.0; rv:11.0) like Gecko',
    'Mozilla/5.0 (Windows NT 6.2; WOW64; Trident/7.0; rv:11.0) like Gecko',
    'Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko',
    'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0)',
    'Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko',
    'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)',
    'Mozilla/5.0 (Windows NT 6.1; Win64; x64; Trident/7.0; rv:11.0) like Gecko',
    'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)',
    'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)',
    'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)'
)
This gives you a variety of User-Agent headers, so each request pretends to come from a browser. Last but not least, feed them into your request():
import requests  # this part uses requests rather than urllib

# Make a GET request with a random user agent and a random proxy
user_agent = random.choice(user_agent_list)
headers = {'User-Agent': user_agent, 'Accept-Language': 'en-US, en;q=0.5'}
proxy = random.choice(proxies)  # {'ip': ..., 'port': ...} from the list built above
proxy_url = 'http://' + proxy['ip'] + ':' + proxy['port']
response = requests.get('your url', headers=headers,
                        proxies={'http': proxy_url, 'https': proxy_url})
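Since fake_useragent is already imported at the top, you could also skip the hard-coded list and draw a header from it directly; a one-line alternative, assuming the ua = UserAgent() object created above:

headers = {'User-Agent': ua.random, 'Accept-Language': 'en-US, en;q=0.5'}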
Hope that solves your problem.
Otherwise take a look here: https://www.scrapehero.com/how-to-fake-and-rotate-user-agents-using-python-3/
Cheers
Answer 2 (score: 0)
To overcome IP rate limits and hide your real IP, you need to use proxies. There are many services that provide them; consider using one, since managing proxies yourself is a real headache and will cost considerably more. I suggest https://botproxy.net, among others. They provide rotating proxies through a single endpoint. A request through this service looks like this:
#!/usr/bin/env python
import urllib.request

opener = urllib.request.build_opener(
    urllib.request.ProxyHandler(
        {'http': 'http://user-key:key-password@x.botproxy.net:8080',
         'https': 'http://user-key:key-password@x.botproxy.net:8080'}))

print(opener.open('https://httpbin.org/ip').read())
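If you would rather leave the question's original urlopen(req) code untouched, the opener can also be installed globally; a minimal sketch (the user-key:key-password credentials are placeholders, as above):

import urllib.request
import bs4

# Route every subsequent urlopen() call through the proxy
# (placeholder credentials -- substitute your own)
proxy_handler = urllib.request.ProxyHandler({
    'http': 'http://user-key:key-password@x.botproxy.net:8080',
    'https': 'http://user-key:key-password@x.botproxy.net:8080'})
urllib.request.install_opener(urllib.request.build_opener(proxy_handler))

# The question's original scraping code then works unchanged
req = urllib.request.Request(
    'https://www.888sport.com/online-sports-betting-promotions/',
    headers={'User-Agent': 'Mozilla/5.0'})
content = bs4.BeautifulSoup(urllib.request.urlopen(req).read(), 'html.parser')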
Or with the requests library:
import requests

res = requests.get(
    'http://httpbin.org/ip',
    proxies={
        'http': 'http://user-key:key-password@x.botproxy.net:8080',
        'https': 'http://user-key:key-password@x.botproxy.net:8080'
    },
    headers={
        'X-BOTPROXY-COUNTRY': 'US'  # request an exit node in a specific country
    })
print(res.text)
They also have proxies in different countries.