Running 2 consecutive Scrapy CrawlerProcess from a script with different settings

Asked: 2017-05-30 05:55:42

Tags: python, scrapy

I have 2 different Scrapy spiders that currently work when launched with:

scrapy crawl spidername -o data\whatever.json

Of course I know I could replicate that command with a system call from the script, but I would rather use CrawlerProcess or some other way of running the spiders from a script.
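
For reference, the system-call approach I want to avoid would look roughly like this (a sketch only; the spider name and output path are the placeholders from the command above):

import subprocess

# Roughly equivalent to running `scrapy crawl spidername -o data\whatever.json`
# in the shell; shown only as the alternative I would rather avoid.
subprocess.run(['scrapy', 'crawl', 'spidername', '-o', r'data\whatever.json'], check=True)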

The thing is: as I read in this SO question (and in the Scrapy docs), I have to set the output file in the settings passed to the CrawlerProcess constructor:

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
    'FEED_FORMAT': 'json',
    'FEED_URI': 'data.json'
})

The problem is that I don't want both spiders to store their data in the same output file, but in two different ones. So my first attempt was, obviously, to create a new CrawlerProcess with different settings once the first job was done:

session_date_format = '%Y%m%d'
session_date = datetime.now().strftime(session_date_format)

try:
    process = CrawlerProcess({
        'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
        'FEED_FORMAT': 'json',
        'FEED_URI': os.path.join('data', 'an_origin', '{}.json'.format(session_date)),
        'DOWNLOAD_DELAY': 3,
        'LOG_STDOUT': True,
        'LOG_FILE': 'scrapy_log.txt',
        'ROBOTSTXT_OBEY': False,
        'RETRY_ENABLED': True,
        'RETRY_HTTP_CODES': [500, 503, 504, 400, 404, 408],
        'RETRY_TIMES': 5
    })
    process.crawl(MyFirstSpider)
    process.start()  # the script will block here until the crawling is finished
except Exception as e:
    print('ERROR while crawling: {}'.format(e))
else:
    print('Data successfuly crawled')

time.sleep(3)  # Wait 3 seconds

try:
    process = CrawlerProcess({
        'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
        'FEED_FORMAT': 'json',
        'FEED_URI': os.path.join('data', 'other_origin', '{}.json'.format(session_date)),
        'DOWNLOAD_DELAY': 3,
        'LOG_STDOUT': True,
        'LOG_FILE': 'scrapy_log.txt',
        'ROBOTSTXT_OBEY': False,
        'RETRY_ENABLED': True,
        'RETRY_HTTP_CODES': [500, 503, 504, 400, 404, 408],
        'RETRY_TIMES': 5
    })
    process.crawl(MyOtherSpider)
    process.start()  # the script will block here until the crawling is finished
except Exception as e:
    print('ERROR while crawling: {}'.format(e))
else:
    print('Data successfuly crawled')

When I do that, the first crawler works as expected. However, the second one creates an empty output file and fails. The same thing happens if I store the second CrawlerProcess in a different variable (e.g. process2). Obviously, I tried swapping the order of the spiders to check whether the problem was specific to one of them, but the one that fails is always whichever runs second.

If I check the log file, after the first job is done it looks as if 2 Scrapy bots are started, so something weird may be going on:

2017-05-29 23:51:41 [scrapy.extensions.feedexport] INFO: Stored json feed (2284 items) in: data\one_origin\20170529.json
2017-05-29 23:51:41 [scrapy.core.engine] INFO: Spider closed (finished)
2017-05-29 23:51:41 [stdout] INFO: Data successfuly crawled
2017-05-29 23:51:44 [scrapy.utils.log] INFO: Scrapy 1.3.2 started (bot: scrapybot)
2017-05-29 23:51:44 [scrapy.utils.log] INFO: Scrapy 1.3.2 started (bot: scrapybot)
2017-05-29 23:51:44 [scrapy.utils.log] INFO: Overridden settings: {'LOG_FILE': 'scrapy_output.txt', 'FEED_FORMAT': 'json', 'FEED_URI': 'data\\other_origin\\20170529.json', 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)', 'LOG_STDOUT': True, 'RETRY_TIMES': 5, 'RETRY_HTTP_CODES': [500, 503, 504, 400, 404, 408], 'DOWNLOAD_DELAY': 3}
2017-05-29 23:51:44 [scrapy.utils.log] INFO: Overridden settings: {'LOG_FILE': 'scrapy_output.txt', 'FEED_FORMAT': 'json', 'FEED_URI': 'data\\other_origin\\20170529.json', 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)', 'LOG_STDOUT': True, 'RETRY_TIMES': 5, 'RETRY_HTTP_CODES': [500, 503, 504, 400, 404, 408], 'DOWNLOAD_DELAY': 3}
...
2017-05-29 23:51:44 [scrapy.core.engine] INFO: Spider opened
2017-05-29 23:51:44 [scrapy.core.engine] INFO: Spider opened
2017-05-29 23:51:44 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-29 23:51:44 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-05-29 23:51:44 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6024
2017-05-29 23:51:44 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6024
2017-05-29 23:51:44 [stdout] INFO: ERROR while crawling:
2017-05-29 23:51:44 [stdout] INFO: ERROR while crawling:

Any idea what is happening and how to solve it?

2 Answers:

Answer 0 (score: 1)

Give each spider its own settings, based on the Practices and Core API sections of the Scrapy docs. The underlying problem in the question is that the Twisted reactor cannot be restarted within the same process, so the second CrawlerProcess.start() call fails; with CrawlerRunner you schedule both crawls first and start the reactor only once:

from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner, Crawler
from scrapy.utils.log import configure_logging

from spiders import Spider1, Spider2

runner = CrawlerRunner()

def crawl(spider, settings):
    # Wrap each spider in its own Crawler so that it gets its own settings,
    # then schedule it on the shared runner.
    crawler = Crawler(spider, settings=settings)
    runner.crawl(crawler)

if __name__ == "__main__":
    configure_logging()

    crawl(Spider1, settings={
        'FEED_EXPORTERS': {
            'xlsx': 'scrapy_xlsx.XlsxItemExporter',
        },
        'DOWNLOAD_DELAY': 1,
        'FEED_FORMAT': 'xlsx',
        'FEED_URI': 'spider1.xlsx'
    })
    crawl(Spider2, settings={
        'DOWNLOAD_DELAY': 1,
        'FEED_FORMAT': 'json',
        'FEED_URI': 'spider2.json'
    })

    # Stop the reactor once every scheduled crawl has finished.
    d = runner.join()
    d.addBoth(lambda _: reactor.stop())

    reactor.run()  # blocks here until reactor.stop() is called
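
Note that runner.join() returns a Deferred that fires once every scheduled crawl has finished; attaching reactor.stop() to it shuts the single reactor down cleanly at that point, so the script never tries to start the reactor a second time.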

Answer 1 (score: -1)

Call process.start() only once, at the very end of your script, and your two scrapers will run at the same time.
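
A minimal sketch of that idea with the question's two spiders (assuming they can be imported from your project); note that every spider scheduled on a single CrawlerProcess shares the same settings dict, so on its own this does not give each spider its own feed file:

from scrapy.crawler import CrawlerProcess

# MyFirstSpider and MyOtherSpider are the question's spider classes.
process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
})
process.crawl(MyFirstSpider)
process.crawl(MyOtherSpider)
process.start()  # blocks until both spiders have finished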

PS: I have already done something like this.

Here is some of the code I can share:

batches = 10
process = CrawlerProcess( SETTINGS HERE )  # create one process and reuse it
while batches > 0:
    process.crawl(AmazonSpider)  # schedule one crawl per batch
    batches = batches - 1

process.start()  # then finally run your spiders