I'm scraping yellowpages.com, and on the whole it seems to work well. However, while iterating through the pagination of a long query, requests.get(url) will randomly return <Response [503]> or <Response [404]>. Occasionally I get a worse exception, such as:

requests.exceptions.ConnectionError: HTTPConnectionPool(host='www.yellowpages.com', port=80): Max retries exceeded with url: /search?search_terms=florists&geo_location_terms=FL&page=22 (Caused by NewConnectionError(': Failed to establish a new connection: [WinError 10053] An established connection was aborted by the software in your host machine',))

Using time.sleep() seems to eliminate the 503 errors, but the 404s and the exceptions remain a problem.

I'm trying to figure out how to "catch" the various responses so that I can make a change (wait, change the proxy, change the user agent) and then retry and/or move on. Pseudocode of what I'm after:
If error/exception with request.get:
    wait and/or change proxy and user agent
    retry request.get
else:
    pass
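Fleshed out a little, what I picture is roughly the sketch below (not working code I already have: the delays, the retry cap, and the USER_AGENTS list are placeholders, and rotating proxies is left out entirely):

import random
import time

import requests

# Placeholder user agents -- stand-ins for whatever rotation I end up using.
USER_AGENTS = ['Mozilla/5.0 (placeholder UA #1)', 'Mozilla/5.0 (placeholder UA #2)']

def fetch_with_retry(url, max_tries=5):
    # Sketch: retry a GET, pausing and rotating the User-Agent after each failure.
    for attempt in range(max_tries):
        headers = {'User-Agent': random.choice(USER_AGENTS)}
        try:
            r = requests.get(url, headers=headers, timeout=10)
        except requests.exceptions.RequestException as e:
            print('Attempt', attempt + 1, 'failed:', e)
        else:
            if r.status_code == 200:
                return r  # success
            print('Attempt', attempt + 1, 'returned status', r.status_code)
        time.sleep(5 * (attempt + 1))  # back off a little longer each time
    return None  # give up after max_tries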
At this point, though, I can't even catch the problem using:
try:
    r = requests.get(url)
except requests.exceptions.RequestException as e:
    print(e)
    import sys  # only added here, because it's not part of my stable code below
    sys.exit()
My full code, which started from a GitHub example, is below:
import requests
from bs4 import BeautifulSoup
import itertools
import csv

# Search criteria
search_terms = ["florists", "pharmacies"]
search_locations = ['CA', 'FL']

# Structure for Data
answer_list = []
csv_columns = ['Name', 'Phone Number', 'Street Address', 'City', 'State', 'Zip Code']

# Turns list of lists into csv file
def write_to_csv(csv_file, csv_columns, answer_list):
    with open(csv_file, 'w') as csvfile:
        writer = csv.writer(csvfile, lineterminator='\n')
        writer.writerow(csv_columns)
        writer.writerows(answer_list)

# Creates url from search criteria and current page
def url(search_term, location, page_number):
    template = 'http://www.yellowpages.com/search?search_terms={search_term}&geo_location_terms={location}&page={page_number}'
    return template.format(search_term=search_term, location=location, page_number=page_number)

# Finds all the contact information for a record
def find_contact_info(record):
    holder_list = []
    name = record.find(attrs={'class': 'business-name'})
    holder_list.append(name.text if name is not None else "")
    phone_number = record.find(attrs={'class': 'phones phone primary'})
    holder_list.append(phone_number.text if phone_number is not None else "")
    street_address = record.find(attrs={'class': 'street-address'})
    holder_list.append(street_address.text if street_address is not None else "")
    city = record.find(attrs={'class': 'locality'})
    holder_list.append(city.text if city is not None else "")
    state = record.find(attrs={'itemprop': 'addressRegion'})
    holder_list.append(state.text if state is not None else "")
    zip_code = record.find(attrs={'itemprop': 'postalCode'})
    holder_list.append(zip_code.text if zip_code is not None else "")
    return holder_list

# Main program
def main():
    for search_term, search_location in itertools.product(search_terms, search_locations):
        i = 0
        while True:
            i += 1
            page_url = url(search_term, search_location, i)  # local name kept distinct so the url() helper isn't shadowed
            r = requests.get(page_url)
            soup = BeautifulSoup(r.text, "html.parser")
            results = soup.find(attrs={'class': 'search-results organic'})
            page_nav = soup.find(attrs={'class': 'pagination'})
            records = results.find_all(attrs={'class': 'info'})
            for record in records:
                answer_list.append(find_contact_info(record))
            if not page_nav.find(attrs={'class': 'next ajax-page'}):
                csv_file = "YP_" + search_term + "_" + search_location + ".csv"
                write_to_csv(csv_file, csv_columns, answer_list)  # output data to csv file
                break

if __name__ == '__main__':
    main()
Thanks in advance for taking the time to read this long post and/or reply. :)
Answer 0 (score: 0)

Something like this:
try:
    req = requests.get(url)
    if req.status_code == 503:
        pass
    elif req.status_code == 404:
        pass
    else:
        pass  # do something when the request succeeds
except requests.exceptions.ConnectionError:
    pass
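Expanded against the loop in the question, that skeleton might become something like the sketch below (the 30-second wait on a 503 and treating a 404 as "ran past the last page" are assumptions on my part, not something the status codes guarantee):

import time

import requests

def get_page(page_url, retries=3):
    # Sketch: return the response on success, or None if the page looks unusable.
    for _ in range(retries):
        try:
            req = requests.get(page_url, timeout=10)
        except requests.exceptions.ConnectionError:
            time.sleep(10)  # connection dropped: wait and retry
            continue
        if req.status_code == 503:
            time.sleep(30)  # throttled: wait longer and retry
        elif req.status_code == 404:
            return None     # assumed to mean we've run past the last page
        else:
            return req      # request succeeded (or at least returned something usable)
    return None

main() could then move on to the next search whenever get_page() returns None, instead of crashing.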
Answer 1 (score: 0)

You can try:
try:
    pass  # do something
except requests.exceptions.ConnectionError as exception:
    pass  # handle the NewConnectionError exception
except Exception as exception:
    pass  # handle any other exception
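Because requests.exceptions.ConnectionError is itself a subclass of Exception, the order shown above matters: the specific handler has to come before the broad one. Filled in against the question's requests.get call, the pattern might look like this (a sketch only; the 15-second pause is an arbitrary value of mine):

import time

import requests

def safe_get(page_url):
    # Sketch: catch connection drops specifically, fall back to a broad handler.
    try:
        return requests.get(page_url, timeout=10)
    except requests.exceptions.ConnectionError as exception:
        # Covers the NewConnectionError / WinError 10053 case from the question.
        print('Connection error, backing off:', exception)
        time.sleep(15)
        return None
    except Exception as exception:
        # Anything else: log it and move on.
        print('Unexpected error:', exception)
        return None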
Answer 2 (score: 0)

I've been doing something similar, and this works for me (mostly):
# For handling the requests to the webpages
import requests
from requests_negotiate_sspi import HttpNegotiateAuth

# Test results, 1 record per URL to test
w = open(r'C:\Temp\URL_Test_Results.txt', 'w')
# For errors only
err = open(r'C:\Temp\URL_Test_Error_Log.txt', 'w')

print('Starting process')

def test_url(url):
    # Test the URL and write the results out to the log files.
    # Had to disable the warnings: with the verify option turned off, a warning is generated because the
    # website certificates are not checked, so results could be "bad". The main site throws errors
    # into the log for each test if we don't turn it off, though.
    requests.packages.urllib3.disable_warnings()
    headers = {'User-Agent': 'Mozilla/5.0 (X11; OpenBSD i386) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36'}

    print('Testing ' + url)
    # Try the website link, check for errors.
    try:
        response = requests.get(url, auth=HttpNegotiateAuth(), verify=False, headers=headers, timeout=5)
    except requests.exceptions.HTTPError as e:
        print('HTTP Error')
        print(e)
        w.write('HTTP Error, check error log' + '\n')
        err.write('HTTP Error' + '\n' + url + '\n' + str(e) + '\n' + '***********' + '\n' + '\n')
    except requests.exceptions.ConnectionError as e:
        # some external sites come through this, even though the links work through the browser
        # I suspect that there's some blocking in place to prevent scraping...
        # I could probably work around this somehow.
        print('Connection error')
        print(e)
        w.write('Connection error, check error log' + '\n')
        err.write('Connection Error' + '\n' + url + '\n' + str(e) + '\n' + '***********' + '\n' + '\n')
    except requests.exceptions.RequestException as e:
        # Any other error types
        print('Other error')
        print(e)
        w.write('Unknown Error' + '\n')
        err.write('Unknown Error' + '\n' + url + '\n' + str(e) + '\n' + '***********' + '\n' + '\n')
    else:
        # Note that a 404 is still 'successful' as we got a valid response back, so it comes through here,
        # not one of the exceptions above.
        print(response.status_code)
        w.write(str(response.status_code) + '\n')
        print('Success! Response code:', response.status_code)
    print('========================')

test_url('https://stackoverflow.com/')
I still have some problems with timeouts on certain sites; you can follow my attempt to resolve that here: 2 Valid URLs, requests.get() fails on 1 but not the other. Why?
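One footnote to the "a 404 is still 'successful'" comment in the code above: requests only raises exceptions for network-level failures. If 4xx/5xx status codes should land in an except block too, Response.raise_for_status() will convert them into requests.exceptions.HTTPError. A minimal sketch (the example URL is just the one from the question's traceback):

import requests

def get_or_raise(url):
    # Sketch: turn 4xx/5xx responses into exceptions so one except block sees everything.
    response = requests.get(url, timeout=5)
    response.raise_for_status()  # raises requests.exceptions.HTTPError for 404, 503, ...
    return response

try:
    r = get_or_raise('http://www.yellowpages.com/search?search_terms=florists&geo_location_terms=FL&page=22')
except requests.exceptions.RequestException as e:
    # HTTPError, ConnectionError and Timeout all inherit from RequestException.
    print('Request failed:', e)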