I am running a scraper with JOBDIR (see https://doc.scrapy.org/en/latest/topics/jobs.html) so that the crawl can be paused and resumed. The scraper ran successfully for a while, but now when I run the spider I get a log that ends as follows:
scraper_1 | 2017-06-21 14:53:10 [scrapy.middleware] INFO: Enabled item pipelines:
scraper_1 | ['scrapy.pipelines.images.ImagesPipeline',
scraper_1 | 'scrapy.pipelines.files.FilesPipeline']
scraper_1 | 2017-06-21 14:53:10 [scrapy.core.engine] INFO: Spider opened
scraper_1 | 2017-06-21 14:53:10 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
scraper_1 | 2017-06-21 14:53:10 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
scraper_1 | 2017-06-21 14:53:12 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.apkmirror.com/sitemap_index.xml> (referer: None)
scraper_1 | 2017-06-21 14:53:13 [scrapy.core.scraper] ERROR: Spider error processing <GET http://www.apkmirror.com/sitemap_index.xml> (referer: None)
scraper_1 | Traceback (most recent call last):
scraper_1 | File "/usr/local/lib/python3.6/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
scraper_1 | yield next(it)
scraper_1 | GeneratorExit
scraper_1 | 2017-06-21 14:53:13 [scrapy.core.engine] INFO: Closing spider (closespider_errorcount)
scraper_1 | Exception ignored in: <generator object iter_errback at 0x7f4cc3a754c0>
scraper_1 | RuntimeError: generator ignored GeneratorExit
scraper_1 | 2017-06-21 14:53:13 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
scraper_1 | {'downloader/request_bytes': 306,
scraper_1 | 'downloader/request_count': 1,
scraper_1 | 'downloader/request_method_count/GET': 1,
scraper_1 | 'downloader/response_bytes': 2498,
scraper_1 | 'downloader/response_count': 1,
scraper_1 | 'downloader/response_status_count/200': 1,
scraper_1 | 'finish_reason': 'closespider_errorcount',
scraper_1 | 'finish_time': datetime.datetime(2017, 6, 21, 14, 53, 13, 139012),
scraper_1 | 'log_count/DEBUG': 26,
scraper_1 | 'log_count/ERROR': 1,
scraper_1 | 'log_count/INFO': 10,
scraper_1 | 'memusage/max': 75530240,
scraper_1 | 'memusage/startup': 75530240,
scraper_1 | 'request_depth_max': 1,
scraper_1 | 'response_received_count': 1,
scraper_1 | 'scheduler/dequeued': 1,
scraper_1 | 'scheduler/dequeued/disk': 1,
scraper_1 | 'scheduler/enqueued': 1,
scraper_1 | 'scheduler/enqueued/disk': 1,
scraper_1 | 'spider_exceptions/GeneratorExit': 1,
scraper_1 | 'start_time': datetime.datetime(2017, 6, 21, 14, 53, 10, 532154)}
scraper_1 | 2017-06-21 14:53:13 [scrapy.core.engine] INFO: Spider closed (closespider_errorcount)
scraper_1 | Unhandled error in Deferred:
scraper_1 | 2017-06-21 14:53:13 [twisted] CRITICAL: Unhandled error in Deferred:
scraper_1 |
scraper_1 | 2017-06-21 14:53:13 [twisted] CRITICAL:
scraper_1 | Traceback (most recent call last):
scraper_1 | File "/usr/local/lib/python3.6/site-packages/twisted/internet/task.py", line 517, in _oneWorkUnit
scraper_1 | result = next(self._iterator)
scraper_1 | File "/usr/local/lib/python3.6/site-packages/scrapy/utils/defer.py", line 63, in <genexpr>
scraper_1 | work = (callable(elem, *args, **named) for elem in iterable)
scraper_1 | File "/usr/local/lib/python3.6/site-packages/scrapy/core/scraper.py", line 183, in _process_spidermw_output
scraper_1 | self.crawler.engine.crawl(request=output, spider=spider)
scraper_1 | File "/usr/local/lib/python3.6/site-packages/scrapy/core/engine.py", line 210, in crawl
scraper_1 | self.schedule(request, spider)
scraper_1 | File "/usr/local/lib/python3.6/site-packages/scrapy/core/engine.py", line 216, in schedule
scraper_1 | if not self.slot.scheduler.enqueue_request(request):
scraper_1 | File "/usr/local/lib/python3.6/site-packages/scrapy/core/scheduler.py", line 54, in enqueue_request
scraper_1 | if not request.dont_filter and self.df.request_seen(request):
scraper_1 | File "/usr/local/lib/python3.6/site-packages/scrapy/dupefilters.py", line 53, in request_seen
scraper_1 | self.file.write(fp + os.linesep)
scraper_1 | TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
apkmirrorscrapercompose_scraper_1 exited with code 0
The error seems to be caused by dupefilters.py. I've looked at the source code at https://github.com/scrapy/scrapy/blob/master/scrapy/dupefilters.py, but so far I haven't been able to find what is causing this error. Any ideas?
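(For reference, the method the traceback ends in is RFPDupeFilter.request_seen; in Scrapy 1.x it looks roughly like the excerpt below. This is paraphrased from scrapy/dupefilters.py, so check the installed version for the exact code; with JOBDIR enabled, self.file is the requests.seen file in the job directory.)

def request_seen(self, request):
    fp = self.request_fingerprint(request)
    if fp in self.fingerprints:
        return True
    self.fingerprints.add(fp)
    if self.file:
        self.file.write(fp + os.linesep)  # line 53 in the traceback above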
UPDATE
Here are some more details on how the spider is implemented. It is a SitemapSpider, defined as follows:
import scrapy
from scrapy.spiders import SitemapSpider

from apkmirror_scraper.spiders.base_spider import BaseSpider


class ApkmirrorSitemapSpider(SitemapSpider, BaseSpider):
    name = 'apkmirror'
    sitemap_urls = ['http://www.apkmirror.com/sitemap_index.xml']
    sitemap_rules = [(r'.*-android-apk-download/$', 'parse')]

    custom_settings = {
        'CLOSESPIDER_PAGECOUNT': 0,
        'CLOSESPIDER_ERRORCOUNT': 1,
        'CONCURRENT_REQUESTS': 32,
        'CONCURRENT_REQUESTS_PER_DOMAIN': 16,
        'TOR_RENEW_IDENTITY_ENABLED': True,
        'TOR_ITEMS_TO_SCRAPE_PER_IDENTITY': 50,
        'FEED_URI': '/scraper/apkmirror_scraper/data/apkmirror.json',
        'FEED_FORMAT': 'json',
        'DUPEFILTER_CLASS': 'apkmirror_scraper.dupefilters.URLDupefilter',
    }

    download_timeout = 60 * 15.0  # Allow 15 minutes for downloading APKs

    def start_requests(self):
        for url in self.sitemap_urls:
            yield scrapy.Request(url, self._parse_sitemap, dont_filter=True)
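(As for the JOBDIR mentioned at the top: the project actually runs under docker-compose and the exact invocation isn't shown here, but a minimal sketch of one way to launch such a spider with a persistent job directory would look like the following; the path is hypothetical.)

# Hypothetical sketch only -- roughly equivalent to: scrapy crawl apkmirror -s JOBDIR=/scraper/jobdir
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
settings.set('JOBDIR', '/scraper/jobdir')  # hypothetical path, not taken from the post
process = CrawlerProcess(settings)
process.crawl('apkmirror')  # spider name as defined above
process.start()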
The parse method is defined in the BaseSpider class. I've defined the custom URLDupefilter as follows:
import os

from scrapy.dupefilters import RFPDupeFilter


class URLDupefilter(RFPDupeFilter):

    def request_fingerprint(self, request):
        '''Simply use the URL as fingerprint. (Scrapy's default is a hash containing the
        request's canonicalized URL, method, body, and (optionally) headers.)
        Omit sitemap pages, which end with ".xml".'''
        if not request.url.endswith('.xml'):
            return request.url

    def request_seen(self, request):
        '''Same as the RFPDupeFilter's request_seen method, except that a fingerprint of None
        is viewed as 'not seen'
        (cf. https://stackoverflow.com/questions/44370949/is-it-ok-for-scrapys-request-fingerprint-method-to-return-none).'''
        fp = self.request_fingerprint(request)
        if fp is None:
            return False  # These two lines are added to the original
        if fp in self.fingerprints:
            return True
        self.fingerprints.add(fp)
        if self.file:
            self.file.write(fp + os.linesep)
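As a quick sanity check of the fingerprint logic in isolation (a hypothetical snippet, not part of the project), the two kinds of URLs the spider requests produce:

from scrapy.http import Request

df = URLDupefilter()
# Sitemap pages end in ".xml" and get no fingerprint:
print(df.request_fingerprint(Request('http://www.apkmirror.com/sitemap_index.xml')))  # None
# Hypothetical example of a download page matching sitemap_rules; the fingerprint is the URL itself:
print(df.request_fingerprint(Request('http://www.apkmirror.com/example-android-apk-download/')))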
However, the error appears to come from Scrapy's built-in RFPDupeFilter class. What I don't understand is why it would still be in effect when I've set DUPEFILTER_CLASS to my custom class?
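(One way to check which dupefilter the scheduler actually instantiated is to inspect it from inside a running spider; the attribute chain below, engine → slot → scheduler → df, matches the one in the traceback. This is a hypothetical debugging spider, not part of the project.)

import scrapy

class DupefilterCheckSpider(scrapy.Spider):
    '''Hypothetical minimal spider that logs the active dupefilter class.'''
    name = 'dupefilter_check'
    start_urls = ['http://www.apkmirror.com/sitemap_index.xml']

    def parse(self, response):
        df = self.crawler.engine.slot.scheduler.df
        self.logger.info('Active dupefilter: %r', type(df))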
Answer 0 (score: 1)
class URLDupefilter(RFPDupeFilter):

    def request_fingerprint(self, request):
        '''Simply use the URL as fingerprint. (Scrapy's default is a hash containing the
        request's canonicalized URL, method, body, and (optionally) headers.)
        Omit sitemap pages, which end with ".xml".'''
        if not request.url.endswith('.xml'):
            return request.url
This returns None if the request URL ends with .xml. The dupefilter then tries to write what it assumes is a string to the file by concatenating it with a line separator. Since it ends up concatenating None and a string, you get the TypeError.
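In isolation, the failing expression is just:

import os

fp = None              # what the overridden request_fingerprint returns for a ".xml" URL
line = fp + os.linesep  # TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'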
To fix this, simply return a string for .xml pages as well:
def request_fingerprint(self, request):
    if not request.url.endswith('.xml'):
        return request.url
    return ''