My Scrapy code doesn't work and I don't understand why. I've only just started scraping, so I don't particularly care which website I use right now. I do know the problem is not related to the URL I chose.
Here is my code:
import scrapy

class Twitter(scrapy.Spider):
    name = "twitter_following"
    start_urls = ['https://www.digitalocean.com']
Answer 0 (score: 0)
$ cat so.py
import scrapy

class Twitter(scrapy.Spider):
    name = "twitter_following"
    start_urls = ['https://www.digitalocean.com']
$ scrapy runspider so.py
2017-07-17 14:55:24 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: scrapybot)
(...)
2017-07-17 14:55:24 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.digitalocean.com> (referer: None)
2017-07-17 14:55:24 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.digitalocean.com> (referer: None)
Traceback (most recent call last):
File "/home/paul/.virtualenvs/scrapy14/lib/python3.6/site-packages/twisted/internet/defer.py", line 653, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/home/paul/.virtualenvs/scrapy14/lib/python3.6/site-packages/scrapy/spiders/__init__.py", line 90, in parse
raise NotImplementedError
NotImplementedError
2017-07-17 14:55:25 [scrapy.core.engine] INFO: Closing spider (finished)
2017-07-17 14:55:25 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 218,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 18321,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 7, 17, 12, 55, 25, 20602),
'log_count/DEBUG': 2,
'log_count/ERROR': 1,
'log_count/INFO': 7,
'memusage/max': 47943680,
'memusage/startup': 47943680,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'spider_exceptions/NotImplementedError': 1,
'start_time': datetime.datetime(2017, 7, 17, 12, 55, 24, 131159)}
2017-07-17 14:55:25 [scrapy.core.engine] INFO: Spider closed (finished)
You need to define a parse callback: it is the default callback used when a Request object does not specify one.
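The fallback behavior can be sketched in plain Python (hypothetical names, not Scrapy's actual internals) to show why the stock spider above raised NotImplementedError:

```python
# Illustrative sketch of Scrapy's "default callback" fallback.
# Plain Python with made-up names -- NOT Scrapy's real internals.

class Request:
    def __init__(self, url, callback=None):
        self.url = url
        self.callback = callback  # may be None

class Spider:
    def parse(self, response):
        # Mirrors what the stock scrapy.Spider.parse does:
        raise NotImplementedError

def handle_response(spider, request, response):
    # If the request named no callback, fall back to spider.parse.
    callback = request.callback or spider.parse
    return callback(response)

class MySpider(Spider):
    def parse(self, response):
        return 'parsed %s' % response

# handle_response(MySpider(), Request('https://example.com'), 'page')
# -> 'parsed page'; with the base Spider it raises NotImplementedError.
```

Because the spider in the question never overrides parse, the fallback lands on the base implementation and the traceback above is the result.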
$ cat so.py
import scrapy

class Twitter(scrapy.Spider):
    name = "twitter_following"
    start_urls = ['https://www.digitalocean.com']

    def parse(self, response):
        self.logger.debug('callback "parse": got response %r' % response)
$ scrapy runspider so.py
2017-07-17 14:58:15 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: scrapybot)
(...)
2017-07-17 14:58:16 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.digitalocean.com> (referer: None)
2017-07-17 14:58:16 [twitter_following] DEBUG: callback "parse": got response <200 https://www.digitalocean.com>
2017-07-17 14:58:16 [scrapy.core.engine] INFO: Closing spider (finished)
2017-07-17 14:58:16 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 218,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 18321,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 7, 17, 12, 58, 16, 482262),
'log_count/DEBUG': 3,
'log_count/INFO': 7,
'memusage/max': 47771648,
'memusage/startup': 47771648,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2017, 7, 17, 12, 58, 15, 609825)}
2017-07-17 14:58:16 [scrapy.core.engine] INFO: Spider closed (finished)