I want to create a separate output file for each URL I set in my spider's start_urls, or somehow split the output files by start URL.
Here are my spider's start_urls:
start_urls = ['http://www.dmoz.org/Arts/', 'http://www.dmoz.org/Business/', 'http://www.dmoz.org/Computers/']
I want to create separate output files like:
Arts.xml
Business.xml
Computers.xml
I don't know how to do this. I was thinking of achieving it by implementing something like the following in the spider_opened method of my item pipeline class:
import re
from scrapy import signals
from scrapy.contrib.exporter import XmlItemExporter

class CleanDataPipeline(object):
    def __init__(self):
        self.cnt = 0
        self.filename = ''

    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider):
        # 'response' is not defined here -- this is exactly the problem
        # described below: spider_opened has no access to a response object.
        referer_url = response.request.headers.get('referer', None)
        if referer_url in spider.start_urls:
            catname = re.search(r'/(.*)$', referer_url, re.I)
            self.filename = catname.group(1)
        file = open('output/' + str(self.cnt) + '_' + self.filename + '.xml', 'w+b')
        self.exporter = XmlItemExporter(file)
        self.exporter.start_exporting()

    def spider_closed(self, spider):
        self.exporter.finish_exporting()
        #file.close()

    def process_item(self, item, spider):
        self.cnt = self.cnt + 1
        self.spider_closed(spider)
        self.spider_opened(spider)
        self.exporter.export_item(item)
        return item
I am trying to find the referer URL of every scraped item in the start_urls list. If the referer URL is found in start_urls, the file name would be built from that referer URL. But the problem is how to access the response object inside the spider_opened() method. If I could access it there, I could create the file based on it.
Any help finding a way to do this? Thanks in advance!
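For what it's worth, Scrapy's item_scraped signal, unlike spider_opened, is sent together with the response, so one way to get per-item response data into a pipeline is something like this untested sketch (class and handler names are mine):

from scrapy import signals

class RefererAwarePipeline(object):  # hypothetical name
    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        # item_scraped handlers receive the response that produced the item
        crawler.signals.connect(pipeline.item_scraped, signals.item_scraped)
        return pipeline

    def item_scraped(self, item, response, spider):
        # here the response (and thus the referer header) is available
        referer_url = response.request.headers.get('Referer')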
[EDIT]
Solved my problem by changing my pipeline code:
import os
import re
from scrapy import signals
from scrapy.contrib.exporter import XmlItemExporter

class CleanDataPipeline(object):
    def __init__(self):
        self.filename = ''
        self.exporters = {}

    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider, fileName='default.xml'):
        self.filename = fileName
        file = open('output/' + self.filename, 'w+b')
        exporter = XmlItemExporter(file)
        exporter.start_exporting()
        self.exporters[fileName] = exporter

    def spider_closed(self, spider):
        for exporter in self.exporters.itervalues():
            exporter.finish_exporting()

    def process_item(self, item, spider):
        fname = 'default'
        catname = re.search(r'http://www.dmoz.org/(.*?)/', str(item['start_url']), re.I)
        if catname:
            fname = catname.group(1)
        self.curFileName = fname + '.xml'

        # If the first exporter was opened under the default name, rename its
        # file to the category-based name and re-key the exporter.
        if self.filename == 'default.xml':
            if os.path.isfile('output/' + self.filename):
                os.rename('output/' + self.filename, 'output/' + self.curFileName)
            exporter = self.exporters['default.xml']
            del self.exporters['default.xml']
            self.exporters[self.curFileName] = exporter
            self.filename = self.curFileName

        if self.filename != self.curFileName and not self.exporters.get(self.curFileName):
            self.spider_opened(spider, self.curFileName)

        self.exporters[self.curFileName].export_item(item)
        return item
I also implemented make_requests_from_url in the spider to set the start_url for each item (the matching parse() step is sketched after the snippet):
def make_requests_from_url(self, url):
    request = Request(url, dont_filter=True)
    request.meta['start_url'] = url
    return request
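The snippet above only stores the start URL on the request; a matching parse() step (assumed here, it is not shown in the post) would copy it into each item so that process_item() can read item['start_url']:

from scrapy.http import Request  # the Request used in make_requests_from_url

def parse(self, response):
    item = DmozItem()  # hypothetical Item class with a start_url Field
    item['start_url'] = response.meta['start_url']
    # ... populate the remaining fields ...
    yield item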
Answer 0 (score: 5):
I would implement a more explicit approach (not tested):
Configure a list of possible categories in settings.py:
CATEGORIES = ['Arts', 'Business', 'Computers']
Define your start_urls based on the settings:
start_urls = ['http://www.dmoz.org/%s' % category for category in settings.CATEGORIES]
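For the class-level list comprehension above to see the settings, the spider module needs access to the settings object; in Scrapy versions of this era (the post uses scrapy.contrib) that was commonly done via the since-deprecated scrapy.conf module, as in this assumed sketch:

from scrapy.conf import settings  # deprecated in later Scrapy releases
from scrapy.spider import Spider

class DmozSpider(Spider):  # hypothetical spider name
    name = 'dmoz'
    # trailing slash kept to match the question's original start_urls
    start_urls = ['http://www.dmoz.org/%s/' % category
                  for category in settings.CATEGORIES]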
Add a category Field to your Item class.
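A minimal sketch of such an Item (every field except category is an assumption):

from scrapy.item import Item, Field

class DmozItem(Item):
    title = Field()
    link = Field()
    category = Field()  # used by the pipeline to pick an exporter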
Set the category field based on the current response.url, e.g.:
def parse(self, response):
    ...
    item['category'] = next(category for category in settings.CATEGORIES if category in response.url)
    ...
Open exporters for all categories, and choose which exporter to use based on item['category']:
def spider_opened(self, spider):
    ...
    self.exporters = {}
    for category in settings.CATEGORIES:
        file = open('output/%s.xml' % category, 'w+b')
        exporter = XmlItemExporter(file)
        exporter.start_exporting()
        self.exporters[category] = exporter

def spider_closed(self, spider):
    for exporter in self.exporters.itervalues():
        exporter.finish_exporting()

def process_item(self, item, spider):
    self.exporters[item['category']].export_item(item)
    return item
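One detail the snippet leaves implicit is how the pipeline itself sees the settings; an assumed way to wire it up, consistent with the from_crawler pattern used earlier in the post, is to read them from the crawler (spider_opened would then iterate over self.categories instead of settings.CATEGORIES):

@classmethod
def from_crawler(cls, crawler):
    pipeline = cls()
    # CATEGORIES comes from settings.py, fetched via the crawler
    pipeline.categories = crawler.settings.getlist('CATEGORIES')
    crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
    crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
    return pipeline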
You may need to tweak it a bit to get it working, but I hope you get the idea: store the category in the item being processed, and choose the file to export to based on the item's category value.
Hope that helps.
Answer 1 (score: 1):
You can't really know the starting URL unless you store it in the item itself. The following solution should work for you:

Redefine make_requests_from_url to send the starting URL with each Request you create. You can store it in the meta attribute of the Request, and pass this starting URL along with each following Request.

As soon as you decide to pass the element to the pipeline, fill in the starting URL for the item from response.meta['start_url'].
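A rough sketch of the "pass it along with each following Request" step (untested; the link extraction and callback name are assumptions):

from urlparse import urljoin  # Python 2, matching the rest of the post
from scrapy.http import Request

def parse(self, response):
    for href in response.xpath('//a/@href').extract():
        request = Request(urljoin(response.url, href), callback=self.parse_item)
        # propagate the original start url to every follow-up request
        request.meta['start_url'] = response.meta['start_url']
        yield request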
Hope it helps. The following link may be helpful:
http://doc.scrapy.org/en/latest/topics/spiders.html#scrapy.spider.Spider.make_requests_from_url
Answer 2 (score: 0):
Here is how I did it for my project, without setting a category on the item:
Pass the argument from the command line, like below:
scrapy crawl reviews_spider -a brand_name=apple

Receive the argument and set it as spider args in my_spider.py:
def __init__(self, brand_name, *args, **kwargs):
    self.brand_name = brand_name
    super(ReviewsSpider, self).__init__(*args, **kwargs)
    # i am reading start_urls from an external file depending on the passed argument
    with open('make_urls.json') as f:
        self.start_urls = json.loads(f.read())[self.brand_name]
In pipelines.py:
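The answer is cut off at this point; a hedged guess at the pipeline it leads into, naming the output file after the brand_name the spider received (every detail below is an assumption):

from scrapy.contrib.exporter import XmlItemExporter

class ReviewsPipeline(object):  # hypothetical name
    def open_spider(self, spider):
        # one output file per brand passed on the command line
        self.file = open('%s.xml' % spider.brand_name, 'w+b')
        self.exporter = XmlItemExporter(self.file)
        self.exporter.start_exporting()

    def close_spider(self, spider):
        self.exporter.finish_exporting()
        self.file.close()

    def process_item(self, item, spider):
        self.exporter.export_item(item)
        return item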