I'm trying to get some links before scraping data from the individual pages, but I'm getting a NotImplementedError with the traceback below:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 588, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/usr/lib/python2.7/dist-packages/scrapy/spiders/__init__.py", line 76, in parse
raise NotImplementedError
NotImplementedError
2017-10-13 06:03:58 [scrapy] INFO: Closing spider (finished)
2017-10-13 06:03:58 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 273,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 81464,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 10, 13, 5, 3, 58, 550062),
'log_count/DEBUG': 2,
'log_count/ERROR': 1,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'spider_exceptions/NotImplementedError': 1,
'start_time': datetime.datetime(2017, 10, 13, 5, 3, 56, 552516)}
2017-10-13 06:03:58 [scrapy] INFO: Spider closed (finished)
I've tried adding:
DOWNLOAD_HANDLERS = {'s3': None,}
to settings.py,
which didn't seem to do anything. I then switched from scrapy.Spider to scrapy.spiders.CrawlSpider, which no longer threw the error message; however, it also didn't print out my final_url, which I think it should if everything is set up correctly? My code is below:
# -*- coding: utf-8 -*-
from scrapy import Spider
from scrapy.http import Request
import scrapy

class Test_spider(scrapy.spiders.CrawlSpider):
    name = "Spider_Testing"
    allowed_domains = ["http://www.example.com/"]
    start_urls = (
        "http://www.example.com/followthrough",
    )

    def parse_links(self, response):
        links = response.xpath('//form/table/tr/td/table//a[div]/@href').extract()
        for link in links:
            base_url = "http://www.example.com/followthrough"  # the full address after / is slightly different than the start url, but that shouldn't matter?
            final_url = response.urljoin(base_url, links)
            print(final_url)  # test 1
            print(Request(final_url, callback=self.parse_final))  # test 2
            yield Request(final_url, callback=self.parse_final)

    def parse_final(self, response):
        pass
So my question about this is: are the test prints correct? I'm thinking #1 is but not #2.

Answer (score: 1):
The error comes from the missing parse method. Since you didn't override start_requests, its default behavior applies:
The default implementation generates Request(url, dont_filter=True) for each url in start_urls.
It doesn't set the callback argument, so Scrapy falls back to calling parse:
If the request doesn't specify a callback, the spider's parse() method will be used. Note that if exceptions are raised during processing, errback is called instead.
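To make that concrete, here is a rough sketch of the defaults (simplified for illustration, not Scrapy's exact source; the real code is in scrapy/spiders/__init__.py, the file shown in your traceback):

from scrapy.http import Request

class Spider(object):
    def start_requests(self):
        for url in self.start_urls:
            # no callback given, so responses are routed to self.parse
            yield Request(url, dont_filter=True)

    def parse(self, response):
        # the line that raises in your traceback
        raise NotImplementedError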
You can fix it by implementing start_requests and specifying the callback argument:
def start_requests(self):
    for url in self.start_urls:
        yield Request(url, callback=self.parse_links)
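Alternatively, simply renaming parse_links to parse would also work, since parse is the callback Scrapy falls back to when none is specified.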
UPDATE:
response.urljoin(url) takes only one argument:

Constructs an absolute url by combining the Response's url with a possible relative url.
You should use response.urljoin(link) or urlparse.urljoin(base_url, link) instead, and make sure the link here is a relative url.
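For example, a minimal sketch of parse_links with the join fixed (assuming the extracted hrefs are relative urls such as /followthrough/page1):

def parse_links(self, response):
    links = response.xpath('//form/table/tr/td/table//a[div]/@href').extract()
    for link in links:
        # join each link individually against the response's url
        final_url = response.urljoin(link)
        yield Request(final_url, callback=self.parse_final)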
UPDATE 2:
You can add the following code and run it:
if __name__ == '__main__':
    from scrapy.crawler import CrawlerProcess

    process = CrawlerProcess({
        'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
    })
    process.crawl(Test_spider)
    process.start()
It allows you to run scrapy from a script, so you can step into it with ipdb or the debugging tools in your IDE.
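For example (assuming ipdb is installed, e.g. via pip install ipdb), you could drop a breakpoint into the callback and inspect the response interactively while the script runs:

def parse_links(self, response):
    # execution pauses here and opens an ipdb shell with `response` in scope
    import ipdb; ipdb.set_trace()
    for link in response.xpath('//form/table/tr/td/table//a[div]/@href').extract():
        yield Request(response.urljoin(link), callback=self.parse_final)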