Running Scrapy from inside a Python script - CSV exporter doesn't work

Asked: 2013-07-20 10:23:40

Tags: python python-2.7 export twisted scrapy

My scraper works fine when I run it from the command line, but when I try to run it from within a Python script (using the Twisted approach outlined here), it does not output the two CSV files that it normally does. I have a pipeline that creates and populates these files, one of them using CsvItemExporter() and the other using writeCsvFile(). Here is the code:

from os import getcwd

from scrapy import signals
from scrapy.contrib.exporter import CsvItemExporter  # Scrapy 0.x import path

# writeCsvFile is the project's own CSV-writing helper, defined elsewhere


class CsvExportPipeline(object):

    def __init__(self):
        self.files = {}

    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider):
        # first CSV: one row per item, streamed through CsvItemExporter
        nodes = open('%s_nodes.csv' % spider.name, 'w+b')
        self.files[spider] = nodes
        self.exporter1 = CsvItemExporter(nodes, fields_to_export=['url', 'name', 'screenshot'])
        self.exporter1.start_exporting()

        # second CSV: an edge list accumulated in memory, written on close
        self.edges = []
        self.edges.append(['Source', 'Target', 'Type', 'ID', 'Label', 'Weight'])
        self.num = 1

    def spider_closed(self, spider):
        self.exporter1.finish_exporting()
        f = self.files.pop(spider)
        f.close()

        writeCsvFile(getcwd() + r'\edges.csv', self.edges)

    def process_item(self, item, spider):
        self.exporter1.export_item(item)

        for url in item['links']:
            self.edges.append([item['url'], url, 'Directed', self.num, '', 1])
            self.num += 1
        return item

Here is my file structure:

SiteCrawler/      # the CSVs are normally created in this folder
    runspider.py  # this is the script that runs the scraper
    scrapy.cfg
    SiteCrawler/
        __init__.py
        items.py
        pipelines.py
        screenshooter.py
        settings.py
        spiders/
            __init__.py
            myfuncs.py
            sitecrawler_spider.py

The scraper appears to run normally in every other respect. The output at the end of the command-line run indicates that the expected number of pages were crawled, and the spider appears to have finished normally. I am not getting any error messages.

---- EDIT: ----

Inserting print statements and deliberate syntax errors into the pipeline has no effect, so the pipeline seems to be ignored entirely. Why would that be?
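For reference, an item pipeline only runs at all if it is enabled in the project's settings.py. A sketch of the relevant entry, using the module and class names from the code and file tree above (Scrapy 0.x used a plain list; newer versions use a dict with an order value):

ITEM_PIPELINES = ['SiteCrawler.pipelines.CsvExportPipeline']

If whatever starts the crawl never loads settings.py, this entry is never seen and the pipeline is silently skipped.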

Here is the code for the script that runs the scraper (runspider.py):

from twisted.internet import reactor

from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy.xlib.pydispatch import dispatcher
import logging

from SiteCrawler.spiders.sitecrawler_spider import MySpider

def stop_reactor():
    reactor.stop()

dispatcher.connect(stop_reactor, signal=signals.spider_closed)
spider = MySpider()
crawler = Crawler(Settings())  # Settings() here is the stock defaults, not SiteCrawler/settings.py
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start(loglevel=logging.DEBUG)
log.msg('Running reactor...')
reactor.run()  # the script will block here until the spider is closed
log.msg('Reactor stopped.')   

2 Answers:

Answer 0 (score: 1)

Replacing "from scrapy.settings import Settings" with "from scrapy.utils.project import get_project_settings as Settings" fixed the problem.
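For completeness, here is runspider.py with that one change applied; everything else is exactly as in the question, and the import alias keeps the rest of the script untouched:

from twisted.internet import reactor

from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.utils.project import get_project_settings as Settings
from scrapy.xlib.pydispatch import dispatcher
import logging

from SiteCrawler.spiders.sitecrawler_spider import MySpider

def stop_reactor():
    reactor.stop()

dispatcher.connect(stop_reactor, signal=signals.spider_closed)
spider = MySpider()
crawler = Crawler(Settings())  # Settings() now returns the project settings, so ITEM_PIPELINES is honored
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start(loglevel=logging.DEBUG)
reactor.run()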

The solution was found here. No explanation for the solution was given.

alecxe has provided an example of how to run Scrapy from within a Python script.

EDIT:

Having read alecxe's post in more detail, I can now see the difference between "from scrapy.settings import Settings" and "from scrapy.utils.project import get_project_settings as Settings". The latter allows you to use your project's settings file, rather than the default settings file. Read alecxe's post (linked above) for more details.
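A quick way to see the difference for yourself, as a sketch (it assumes you run it from inside the project directory so that scrapy.cfg can be found):

from scrapy.settings import Settings
from scrapy.utils.project import get_project_settings

print Settings().get('ITEM_PIPELINES')              # library defaults: empty
print get_project_settings().get('ITEM_PIPELINES')  # includes entries from SiteCrawler/settings.py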

Answer 1 (score: 0)

In my project, I call the scrapy code from another python script using os.system:
import os

# run the spider from the project directory, handing the output
# locations to scrapy on the command line
os.chdir('/home/admin/source/scrapy_test')
command = "scrapy crawl test_spider -s FEED_URI='file:///home/admin/scrapy/data.csv' -s LOG_FILE='/home/admin/scrapy/scrapy_test.log'"
return_code = os.system(command)
print 'done'
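An equivalent using the subprocess module, which avoids changing the calling script's working directory, might look like this (same example paths as above):

import subprocess

# run scrapy inside the project directory without os.chdir
return_code = subprocess.call(
    ['scrapy', 'crawl', 'test_spider',
     '-s', 'FEED_URI=file:///home/admin/scrapy/data.csv',
     '-s', 'LOG_FILE=/home/admin/scrapy/scrapy_test.log'],
    cwd='/home/admin/source/scrapy_test',
)
print 'done' if return_code == 0 else 'scrapy exited with %d' % return_code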