scrapy crawl works fine from the command line, but gives unexpected results when run from a script

Time: 2017-08-05 04:06:09

Tags: python scrapy scrapy-spider

I'm running into a problem with Scrapy. When I run the command scrapy crawl album -o test.xml, the spider works fine. But when I crawl from a script, I pass the spider a different start URL, yet I get the same result as with the command. Both URLs are reachable. Here is the code I wrote. Please point out what I'm doing wrong, thanks.

Spider file xiami_scrapy.py

import scrapy
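# headers used to blank out the Referer on the pagination requests in parse()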
empty_referer = {
    'Referer': ''
}

class AlbumSpider(scrapy.Spider):
    name = 'album'
    start_urls = [
        'http://www.xiami.com/artist/album-eJlX61793',
    ]
    artist = 'giga'

    def __init__(self, url=None, artist=None, *args, **kwargs):
        super(AlbumSpider, self).__init__(*args, **kwargs)
        if artist is not None:
            self.artist = artist
        if url is not None:
            self.start_urls = [url]

    def parse(self, response):
        for album in response.css('.album_item100_thread'):
            yield {
                'artist': self.artist,
                'title': album.css('.name>a>strong::text').extract_first(),
                'fav_count': album.css('.fav_c_ico::text').extract_first(),
                'star_rating': album.css('.album_rank>em::text').extract_first(),
                'release_date': response.css('.company>a::text')[1].extract().strip(),
                'company': album.css('.company>a::text')[0].extract(),
                'url': album.css('.name>a::attr(href)').extract_first(),
            }

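        # follow the pagination link, sending an empty Referer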
        next_page = response.css('.p_redirect_l::attr(href)').extract_first()
        if next_page is not None:
            next_page = response.urljoin(next_page)
            yield scrapy.Request(next_page, headers=empty_referer, callback=self.parse)

Script file test.py

from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from xiamiscrapy.spiders.xiami_scrapy import AlbumSpider
from scrapy.utils.log import configure_logging

configure_logging({'LOG_FORMAT': '%(levelname)s: %(message)s'})
runner = CrawlerRunner()

@defer.inlineCallbacks
def crawl():
    spider = AlbumSpider(url='http://www.xiami.com/artist/album-bzMAng64c0a', artist='reol')
    yield runner.crawl(spider)
    reactor.stop()

crawl()
reactor.run()

1 Answer:

Answer 0 (score: 0)

When you set start_urls in the spider constructor, you assign it as self.start_urls. That only creates an instance attribute on the object you built, while start_urls is a class attribute. Since runner.crawl() constructs its own spider from the class (via from_crawler), the instance you created, along with its custom start_urls, is never the one that actually runs. That is why it does not work.

See this SO question for how to do this correctly.
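In short: hand runner.crawl() the spider class and the constructor arguments, and let Scrapy build the spider itself. A minimal sketch of that pattern, reusing the imports and URL from the question (untested against xiami.com, so treat it as illustrative):

from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from xiamiscrapy.spiders.xiami_scrapy import AlbumSpider

configure_logging({'LOG_FORMAT': '%(levelname)s: %(message)s'})
runner = CrawlerRunner()

# Pass the spider *class* plus keyword arguments; Scrapy calls
# AlbumSpider.from_crawler(), which forwards url= and artist= to
# __init__, so the custom start_urls reaches the spider that runs.
d = runner.crawl(AlbumSpider,
                 url='http://www.xiami.com/artist/album-bzMAng64c0a',
                 artist='reol')
d.addBoth(lambda _: reactor.stop())
reactor.run()

With this setup, the spider that Scrapy creates and runs is the one carrying your url and artist, instead of a throwaway instance built in the script.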