I'm currently trying to export my scraped data to a file whose name is based on the spider's name.
Here is my pipelines.py:
from mydatacrowd.models import Datacrowd
from scrapy import signals
from scrapy.contrib.exporter import CsvItemExporter

class CsvExportPipeline(object):
    def __init__(self):
        self.files = {}

    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider):
        print 'Hello world!'
        print spider.name
        file = open('%s.csv' % spider.name, 'w+b')
        self.files[spider] = file
        self.exporter = CsvItemExporter(file)
        self.exporter.start_exporting()

    def spider_closed(self, spider):
        self.exporter.finish_exporting()
        file = self.files.pop(spider)
        file.close()

    def process_item(self, item, spider):
        item.save()
        return item
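The per-spider naming logic the pipeline relies on ('%s.csv' % spider.name) can be sketched without Scrapy. Below, a hypothetical export_items helper (not part of the pipeline above) builds the filename and CSV content in memory; the spider name and fields are illustrative:

```python
import csv
import io

def export_items(spider_name, fieldnames, items):
    """Hypothetical stand-in for the pipeline: returns the per-spider
    filename and the CSV text that would be written to it."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    for item in items:
        writer.writerow(item)
    # the real pipeline opens '%s.csv' % spider.name on disk instead
    return '%s.csv' % spider_name, buf.getvalue()

filename, data = export_items('datacrowd', ['url', 'title'],
                              [{'url': 'http://example.com', 'title': 'Example'}])
print(filename)  # datacrowd.csv
```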
Here is the relevant part of my settings.py:
...
ITEM_PIPELINES = {
    'datacrowdscrapy.pipelines.CsvExportPipeline': 1000,
}
FEED_FORMAT = 'csv'
FEED_EXPORTERS = {
    'csv': 'datacrowdscrapy.feedexport.CsvScrapperExporter'
}
...
And here is my feedexport.py:
from scrapy.conf import settings
from scrapy.contrib.exporter import CsvItemExporter

class CsvScrapperExporter(CsvItemExporter):
    def __init__(self, *args, **kwargs):
        kwargs['fields_to_export'] = settings.getlist('EXPORT_FIELDS') or None
        kwargs['encoding'] = settings.get('EXPORT_ENCODING', 'utf-8')
        super(CsvScrapperExporter, self).__init__(*args, **kwargs)
No file is created, no error is shown, and the 'Hello world' never appears in the log. What am I missing?
Thanks!
EDIT:
There is no FEED_URI setting in my settings.py. Could that be the problem?
Answer 0 (score: 1)
Looking at the source of the scrapy crawl command, it seems Scrapy only reads the FEED_EXPORTERS setting if you provide an output option, like this:
scrapy crawl <spider_name> -o output.csv -t csv
From scrapy/commands/crawl.py:
if opts.output:
    ...
    valid_output_formats = self.settings['FEED_EXPORTERS'].keys() + \
        self.settings['FEED_EXPORTERS_BASE'].keys()
    ...
    self.settings.overrides['FEED_FORMAT'] = opts.output_format
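As an alternative to a custom pipeline, Scrapy's built-in feed exports can already produce per-spider filenames: the FEED_URI setting accepts a %(name)s placeholder that is replaced by the spider's name. A minimal settings.py sketch, assuming the built-in feed export machinery rather than the custom exporter above:

```python
# settings.py fragment (sketch; relies on Scrapy's built-in feed exports)
FEED_URI = '%(name)s.csv'  # %(name)s expands to the spider's name
FEED_FORMAT = 'csv'
```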