I've just started using Scrapy. I'm trying to go through the whole database page by page, like a generic search engine, and grab certain links that I need, but I get this error when I try to move on to the next page. I'm not entirely sure how to advance to the next page properly, so any help with the correct approach is appreciated!
Here is my code:
import scrapy


class TestSpider(scrapy.Spider):
    name = "PLC"
    allowed_domains = ["exploit-db.com"]
    start_urls = [
        "https://www.exploit-db.com/local/"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2] + '.html'
        links = response.xpath('//tr/td[5]/a/@href').extract()
        description = response.xpath('//tr/td[5]/a[@href]/text()').extract()
        for data, link in zip(description, links):
            if "PLC" in data:
                with open(filename, "a") as f:
                    f.write(data + '\n')
                    f.write(link + '\n\n')
                    f.close()
            else:
                pass

        next_page = response.xpath('//div[@class="pagination"][1]//a/@href').extract()
        if next_page:
            url = response.urljoin(next_page[0])
            yield scrapy.Request(url, callback=self.parse)
But I get this error(?) on the console:
2016-06-08 16:05:21 [scrapy] INFO: Enabled item pipelines:
[]
2016-06-08 16:05:21 [scrapy] INFO: Spider opened
2016-06-08 16:05:21 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-06-08 16:05:21 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-06-08 16:05:22 [scrapy] DEBUG: Crawled (200) <GET https://www.exploit-db.com/robots.txt> (referer: None)
2016-06-08 16:05:22 [scrapy] DEBUG: Crawled (200) <GET https://www.exploit-db.com/local/> (referer: None)
2016-06-08 16:05:23 [scrapy] DEBUG: Crawled (200) <GET https://www.exploit-db.com/local/?order_by=date&order=desc&pg=2> (referer: https://www.exploit-db.com/local/)
2016-06-08 16:05:23 [scrapy] DEBUG: Crawled (200) <GET https://www.exploit-db.com/local/?order_by=date&order=desc&pg=1> (referer: https://www.exploit-db.com/local/?order_by=date&order=desc&pg=2)
2016-06-08 16:05:23 [scrapy] DEBUG: Filtered duplicate request: <GET https://www.exploit-db.com/local/?order_by=date&order=desc&pg=2> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2016-06-08 16:05:23 [scrapy] INFO: Closing spider (finished)
2016-06-08 16:05:23 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1162,
'downloader/request_count': 4,
'downloader/request_method_count/GET': 4,
'downloader/response_bytes': 40695,
'downloader/response_count': 4,
'downloader/response_status_count/200': 4,
'dupefilter/filtered': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 6, 8, 8, 5, 23, 514161),
'log_count/DEBUG': 6,
'log_count/INFO': 7,
'request_depth_max': 3,
'response_received_count': 4,
'scheduler/dequeued': 3,
'scheduler/dequeued/memory': 3,
'scheduler/enqueued': 3,
'scheduler/enqueued/memory': 3,
'start_time': datetime.datetime(2016, 6, 8, 8, 5, 21, 561678)}
2016-06-08 16:05:23 [scrapy] INFO: Spider closed (finished)
It fails to crawl the next page, and I'd love an explanation of why T.T
Answer 0 (score: 0)
You can use the dont_filter=True argument in the request:
if next_page:
    url = response.urljoin(next_page[0])
    yield scrapy.Request(url, callback=self.parse, dont_filter=True)
But then you'll run into an infinite loop, because it seems your xpath is retrieving the same link twice (check the pager on every page, since the second element of .pagination may not always be the "next page"):
next_page = response.xpath('//div[@class="pagination"][1]//a/@href').extract()
Also, what if they start using Bootstrap or something similar and add the class btn btn-default to the links?
I would suggest using
selector.css(".pagination").xpath('.//a/@href')
instead.
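For example, here is a minimal sketch of how the pagination part could look, assuming the pager's "next" anchor actually contains the word "next" in its text (you'd have to check the real markup on exploit-db.com and adjust the predicate to whatever the pager renders):

import scrapy


class TestSpider(scrapy.Spider):
    name = "PLC"
    allowed_domains = ["exploit-db.com"]
    start_urls = ["https://www.exploit-db.com/local/"]

    def parse(self, response):
        # ... extract and save the rows you need here ...

        # Follow only the pager link whose text says "next" (case-insensitive),
        # so the spider never re-queues the page it just came from and the
        # duplicate filter does not have to be disabled.
        next_page = response.css('.pagination').xpath(
            './/a[contains(translate(text(), "NEXT", "next"), "next")]/@href'
        ).extract_first()
        if next_page:
            yield scrapy.Request(response.urljoin(next_page), callback=self.parse)

That way each page yields at most one new request, so the crawl stops naturally on the last page instead of looping forever or being cut short by the dupefilter.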