Python scrapy not calling SitemapSpider callbacks

Date: 2016-11-09 01:59:51

Tags: python xml web-scraping scrapy web-crawler

I have read the documentation for the SitemapSpider class here: https://scrapy.readthedocs.io/en/latest/topics/spiders.html#sitemapspider

Here is my code:

import scrapy
from scrapy import Request


class CurrentHarvestSpider(scrapy.spiders.SitemapSpider):
    name = "newegg"
    allowed_domains = ["newegg.com"]
    sitemap_urls = ['http://www.newegg.com/Siteindex_USA.xml']
    # if I comment this out, then the parse function should be called by default for every link, but it doesn't
    sitemap_rules = [('/Product', 'parse_product_url'), ('product', 'parse_product_url')]
    sitemap_follow = ['/newegg_sitemap_product', '/Product']

    def parse(self, response):
        with open("/home/dan/debug/newegg_crawler.log", "a") as log:
            log.write("logging from parse " + response.url)
        self.this_function_does_not_exist()  # intentionally undefined: should raise if this callback runs
        yield Request(response.url, callback=self.some_callback)

    def some_callback(self, response):
        with open("/home/dan/debug/newegg_crawler.log", "a") as log:
            log.write("logging from some_callback " + response.url)
        self.this_function_does_not_exist()  # intentionally undefined

    def parse_product_url(self, response):
        with open("/home/dan/debug/newegg_crawler.log", "a") as log:
            log.write("logging from parse_product_url " + response.url)
        self.this_function_does_not_exist()  # intentionally undefined

This runs successfully with scrapy installed: install it with pip install scrapy and execute the spider from the working directory with scrapy crawl newegg.

My question is: why aren't any of these callbacks being called? The documentation claims the callbacks defined in sitemap_rules should be invoked. If I comment the rules out, parse() should be called by default for every link, but it still isn't. Is the documentation just 100% wrong? I'm checking the log file I set up, and nothing gets written to it; I even set its permissions to 777. On top of that, I call a function that doesn't exist, which should raise an error and prove the callback is actually being run, but no error ever occurs. What am I doing wrong?

1 Answer:

Answer (score: 2):

Here is what I get on the console when I run your spider:

$ scrapy runspider op.py 
2016-11-09 21:34:51 [scrapy] INFO: Scrapy 1.2.1 started (bot: scrapybot)
(...)
2016-11-09 21:34:51 [scrapy] INFO: Spider opened
2016-11-09 21:34:51 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-11-09 21:34:51 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-11-09 21:34:51 [scrapy] DEBUG: Crawled (200) <GET http://www.newegg.com/Siteindex_USA.xml> (referer: None)
2016-11-09 21:34:53 [scrapy] DEBUG: Crawled (200) <GET http://www.newegg.com/Sitemap/USA/newegg_sitemap_product01.xml.gz> (referer: http://www.newegg.com/Siteindex_USA.xml)
2016-11-09 21:34:53 [scrapy] ERROR: Spider error processing <GET http://www.newegg.com/Sitemap/USA/newegg_sitemap_product01.xml.gz> (referer: http://www.newegg.com/Siteindex_USA.xml)
Traceback (most recent call last):
  File "/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output
    for x in result:
  File "/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/referer.py", line 22, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/spiders/sitemap.py", line 44, in _parse_sitemap
    s = Sitemap(body)
  File "/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/utils/sitemap.py", line 17, in __init__
    rt = self._root.tag
AttributeError: 'NoneType' object has no attribute 'tag'

You may have noticed the AttributeError exception: scrapy is telling you it had trouble parsing the sitemap response body.

If scrapy cannot make sense of the sitemap content, it cannot parse it as XML, so it cannot follow any <loc> URLs, and none of your callbacks is ever called because there is nothing to call them on.
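To make that concrete, here is a rough sketch of the flow SitemapSpider follows (my own simplification based on the traceback above, not scrapy's actual source): the response body is handed to scrapy.utils.sitemap.Sitemap, and only if that parse succeeds are the <loc> URLs matched against sitemap_rules and turned into requests.

import re
from scrapy.utils.sitemap import Sitemap


def follow_sitemap(body, sitemap_rules):
    # Approximation of SitemapSpider's inner loop, for illustration only.
    # Compile the (url_pattern, callback_name) pairs from sitemap_rules.
    rules = [(re.compile(pattern), callback) for pattern, callback in sitemap_rules]
    sitemap = Sitemap(body)  # this is the call that fails on a body that is not valid XML
    for entry in sitemap:    # each entry is a dict containing a 'loc' URL
        for pattern, callback in rules:
            if pattern.search(entry['loc']):
                # in the real spider this becomes a Request(entry['loc'], callback=...)
                yield entry['loc'], callback
                break

If Sitemap(body) raises, as it does here, the loop never runs, no request is scheduled, and the undefined-function trick in the question never gets a chance to fire.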

So you have actually found a bug in scrapy (thanks for reporting it): https://github.com/scrapy/scrapy/issues/2389

As for the bug itself:

The sub-sitemaps, e.g. http://www.newegg.com/Sitemap/USA/newegg_sitemap_store01.xml.gz, are served as gzip-compressed .gz files that are gzipped twice, so the HTTP response body needs to be gunzipped twice before it can be parsed correctly as XML.

Scrapy does not handle this case, hence the exception printed above.

Here is a basic sitemap spider that tries to gunzip the responses a second time:

from scrapy.utils.gz import gunzip
import scrapy


class CurrentHarvestSpider(scrapy.spiders.SitemapSpider):
    name = "newegg"
    allowed_domains = ["newegg.com"]
    sitemap_urls = ['http://www.newegg.com/Siteindex_USA.xml']

    def parse(self, response):
        self.logger.info('parsing %r' % response.url)

    def _get_sitemap_body(self, response):
        # let SitemapSpider do its usual decompression first
        body = super(CurrentHarvestSpider, self)._get_sitemap_body(response)
        self.logger.debug("body[:32]: %r" % body[:32])
        try:
            # if the body is still gzipped, unzip it a second time
            body_unzipped_again = gunzip(body)
            self.logger.debug("body_unzipped_again[:32]: %r" % body_unzipped_again[:100])
            return body_unzipped_again
        except Exception:
            # body was not gzipped twice; use it as-is
            pass
        return body

This log shows that newegg's .xml.gz sitemaps do indeed need to be gunzipped twice:

$ scrapy runspider spider.py 
2016-11-09 13:10:56 [scrapy] INFO: Scrapy 1.2.1 started (bot: scrapybot)
(...)
2016-11-09 13:10:56 [scrapy] INFO: Spider opened
2016-11-09 13:10:56 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-11-09 13:10:56 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-11-09 13:10:57 [scrapy] DEBUG: Crawled (200) <GET http://www.newegg.com/Siteindex_USA.xml> (referer: None)
2016-11-09 13:10:57 [newegg] DEBUG: body[:32]: '\xef\xbb\xbf<?xml version="1.0" encoding='
2016-11-09 13:10:57 [scrapy] DEBUG: Crawled (200) <GET http://www.newegg.com/Sitemap/USA/newegg_sitemap_store01.xml.gz> (referer: http://www.newegg.com/Siteindex_USA.xml)
2016-11-09 13:10:57 [newegg] DEBUG: body[:32]: '\x1f\x8b\x08\x08\xda\xef\x1eX\x00\x0bnewegg_sitemap_store01'
2016-11-09 13:10:57 [newegg] DEBUG: body_unzipped_again[:32]: '\xef\xbb\xbf<?xml version="1.0" encoding="utf-8"?><urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"'
2016-11-09 13:10:57 [scrapy] DEBUG: Filtered duplicate request: <GET http://www.newegg.com/Hubs/SubCategory/ID-26> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2016-11-09 13:10:59 [scrapy] DEBUG: Crawled (200) <GET http://www.newegg.com/Sitemap/USA/newegg_sitemap_product15.xml.gz> (referer: http://www.newegg.com/Siteindex_USA.xml)
2016-11-09 13:10:59 [newegg] DEBUG: body[:32]: '\x1f\x8b\x08\x08\xe3\xfa\x1eX\x00\x0bnewegg_sitemap_product'
2016-11-09 13:10:59 [newegg] DEBUG: body_unzipped_again[:32]: '\xef\xbb\xbf<?xml version="1.0" encoding="utf-8"?><urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"'
(...)
2016-11-09 13:11:02 [scrapy] DEBUG: Crawled (200) <GET http://www.newegg.com/Product/Product.aspx?Item=9SIA04Y0766512> (referer: http://www.newegg.com/Sitemap/USA/newegg_sitemap_product15.xml.gz)
(...)
2016-11-09 13:11:02 [newegg] INFO: parsing 'http://www.newegg.com/Product/Product.aspx?Item=9SIA04Y0766512'
(...)
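A side note on the design: instead of the bare try/except in the spider above, you could check for the gzip magic bytes ('\x1f\x8b', visible at the start of the compressed bodies in the log) and only gunzip a second time when they are present. A minimal sketch of that variant (my own assumption, not part of the original answer; the class and spider name are hypothetical):

from scrapy.utils.gz import gunzip
import scrapy


class DoubleGunzipSitemapSpider(scrapy.spiders.SitemapSpider):
    name = "newegg_doublegz"  # hypothetical spider name for this variant
    allowed_domains = ["newegg.com"]
    sitemap_urls = ['http://www.newegg.com/Siteindex_USA.xml']

    def parse(self, response):
        self.logger.info('parsing %r' % response.url)

    def _get_sitemap_body(self, response):
        # let SitemapSpider decompress once, as it normally does
        body = super(DoubleGunzipSitemapSpider, self)._get_sitemap_body(response)
        # gzip streams start with the magic bytes 1f 8b; if the decompressed
        # body still starts with them, it was gzipped twice, so gunzip again
        if body is not None and body[:2] == b'\x1f\x8b':
            body = gunzip(body)
        return body

Checking the magic number avoids silently swallowing unrelated errors while behaving the same as the try/except version on newegg's doubly-compressed sitemaps.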