Hello, I am using Scrapy to scrape a website.
I have written a spider that fetches all the information and saves it to a CSV file through pipeline.py.
The pipeline.py code:
    # Imports inferred from the code below (old-style Scrapy API).
    import csv
    from datetime import datetime

    from scrapy import log, signals
    from scrapy.xlib.pydispatch import dispatcher

    class ExamplePipeline(object):
        def __init__(self):
            dispatcher.connect(self.spider_opened, signal=signals.spider_opened)
            dispatcher.connect(self.spider_closed, signal=signals.spider_closed)

        def spider_opened(self, spider):
            log.msg("opened spider %s at time %s" % (spider.name, datetime.now().strftime('%H-%M-%S')))
            # Note: the original filename used "%d/%m/%Y", but "/" is not valid
            # inside a filename; dashes are used here instead.
            self.exampledotcomCsv = csv.writer(
                open("csv's/%s(%s).csv" % (spider.name, datetime.now().strftime("%d-%m-%Y,%H-%M-%S")), "wb"),
                delimiter=',', quoting=csv.QUOTE_MINIMAL)
            self.exampledotcomCsv.writerow(['field1', 'field2', 'field3', 'field4'])

        def process_item(self, item, spider):
            log.msg("Processing item " + item['title'], level=log.DEBUG)
            # The original wrote to self.brandCategoryCsv here, which is never
            # defined; it should be the writer created in spider_opened.
            self.exampledotcomCsv.writerow([item['field1'].encode('utf-8'),
                                            [i.encode('utf-8') for i in item['field2']],
                                            item['field3'].encode('utf-8'),
                                            [i.encode('utf-8') for i in item['field4']]])
            return item

        def spider_closed(self, spider):
            log.msg("closed spider %s at %s" % (spider.name, datetime.now().strftime('%H-%M-%S')))
With the code above I can get the start time and end time of the spider,
but after the spider closes I want to calculate and display the total time it took,
i.e. the difference between the start time and the end time.
How can I do that? Can this be written in the spider_closed method?
Please let me know.
Answer (score: 1)

Why not:
    def spider_opened(self, spider):
        spider.started_on = datetime.now()
        ...

    def spider_closed(self, spider):
        work_time = datetime.now() - spider.started_on
        ...
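The idea above can be sketched as a small self-contained example (the class and attribute names are assumptions for illustration, not the asker's real pipeline): stamp the spider object with a start time when it opens, then subtract on close to get a `datetime.timedelta`.

```python
from datetime import datetime

class TimingPipeline(object):
    """Minimal sketch of the answer's approach (Scrapy wiring omitted)."""

    def spider_opened(self, spider):
        # Record the start time on the spider object itself.
        spider.started_on = datetime.now()

    def spider_closed(self, spider):
        # Subtracting two datetimes yields a timedelta.
        work_time = datetime.now() - spider.started_on
        return work_time

# The subtraction itself, shown with fixed timestamps:
start = datetime(2024, 1, 1, 10, 0, 0)
end = datetime(2024, 1, 1, 10, 5, 30)
elapsed = end - start
print(elapsed)                  # 0:05:30
print(elapsed.total_seconds())  # 330.0
```

The `timedelta` can be logged directly (its `str()` form reads like `0:05:30`) or converted to seconds with `total_seconds()` for a numeric value.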