I wrote a scraper in Python. Unfortunately, when the scraper hits a 404 or 505 page, it stops working. How can I skip those pages in the loop to avoid this problem?
Here is my code:
import requests
from bs4 import BeautifulSoup
import time
c = int(40622)
a = 10
for a in range(10):
    url = 'https://example.com/rockery/'+str(c)
    c = int(c) + 1
    print('-------------------------------------------------------------------------------------')
    print(url)
    print(c)
    time.sleep(5)
    response = requests.get(url)
    html = response.content
    soup = BeautifulSoup(html, "html.parser")
    name = soup.find('a', attrs={'class': 'name-hyperlink'})
    name_final = name.text
    name_details = soup.find('div', attrs={'class': 'post-text'})
    name_details_final = name_details.text
    name_taglist = soup.find('div', attrs={'class': 'post-taglist'})
    name_taglist_final = name_taglist.text
    name_accepted_tmp = soup.find('div', attrs={'class': 'accepted-name'})
    name_accepted = name_accepted_tmp.find('div', attrs={'class': 'post-text'})
    name_accepted_final = name_accepted.text
    print('q_title=',name_final,'\nq_details=',name_details,'\nq_answer=',name_accepted)
    print('-------------------------------------------------------------------------------------')
Here is the error I get when I hit a 404 or 505 page:
Error:
Traceback (most recent call last):
  File "scrab.py", line 18,
    name_final = name.text
AttributeError: 'NoneType' object has no attribute 'text'
Answer 0 (score: 4):
Check the status code of the response; if it is not 200 (OK), you can skip the page with a continue statement, which jumps to the next iteration of the loop:
response = requests.get(url)
if response.status_code != 200:  # could also check == requests.codes.ok
    continue
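
For illustration, here is a minimal sketch of how that check could slot into the loop from your question (only the title lookup is shown; the None guard on name is an extra assumption on my part, since find() returns None when an element is missing from the page):

import requests
from bs4 import BeautifulSoup
import time

c = 40622
for a in range(10):
    url = 'https://example.com/rockery/' + str(c)
    c += 1
    time.sleep(5)  # wait between requests, as in the original code

    response = requests.get(url)
    if response.status_code != 200:  # skip 404, 505, or any other non-OK page
        continue

    soup = BeautifulSoup(response.content, "html.parser")
    name = soup.find('a', attrs={'class': 'name-hyperlink'})
    if name is not None:  # guard: find() returns None if the element is absent
        print('q_title =', name.text)

The same status check also gives you a natural place to log the failing URLs instead of silently skipping them, for example by printing url and response.status_code just before the continue.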