I built a spider using Selenium plus Scrapy. It was scraping fine with the same script up until yesterday, and I was able to write the output to a CSV file. But this afternoon it started reporting that scrapy is not a recognized command, and neither are python and pip.
So I reinstalled everything from scratch, including Python. When I run the spider now it runs smoothly, but it no longer writes the output the way it did before.
I have been racking my brain over this for the past four hours and cannot figure out a way forward; I would be grateful if anyone could help.
I have tried changing the pipeline several times.
settings.py
BOT_NAME = 'mcmastersds'
SPIDER_MODULES = ['grainger.spiders']
NEWSPIDER_MODULE = 'grainger.spiders'
LOG_LEVEL = 'INFO'
ROBOTSTXT_OBEY = False
ITEM_PIPELINES = {'grainger.pipelines.GraingerPipeline': 300,}
DOWNLOAD_DELAY = 1
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.76 Safari/537.36 OPR/43.0.2442.806'
PROXY_MODE = 0
RETRY_TIMES = 0
SPLASH_URL = 'http://localhost:8050'
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
pipelines.py
import csv
import os.path
from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst, MapCompose, Join
class GraingerPipeline(object):

    def __init__(self):
        if not os.path.isfile('CONTENT_psysci.csv'):
            self.csvwriter = csv.writer(open('safale.csv', 'a', newline="", encoding='utf8'))
            self.csvwriter.writerow(['url','Title','sellername','travlink','travlink1','rating','Crreview','feature','Description','proddescription','Additonalinfo','details','detailsextended','producttable','stockstatus','newseller','condition','deliverystatus','price','bestsellersrank','mainimage','subimage'])

    def process_item(self, item, spider):
        self.csvwriter.writerow([item['url'],item['title'],item['sellername'],item['travlink'],item['travlink1'],item['rating'],item['Crreview'],item['feature'],item['Description'],item['proddescription'],item['Additonalinfo'],item['details'],item['detailsextended'],item['producttable'],item['stockstatus'],item['newseller'],item['condition'],item['deliverystatus'],item['price'],item['bestsellersrank'],item['mainimage'],item['subimage']])
        return item
Can you help me?
Answer (score: 2)
If you just want to write items out without doing anything data-specific, I suggest you use the feed exports feature. Scrapy provides a built-in CSV feed exporter.
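For reference, a minimal sketch of the feed-export route, using the legacy FEED_* settings names (replaced by the FEEDS dict in Scrapy 2.1); the field list is copied from the item keys used in the pipeline below:

# settings.py -- with feed exports, no custom pipeline is needed for plain CSV
FEED_FORMAT = 'csv'
FEED_URI = 'output.csv'
# optional: fixes the column order, which is otherwise unspecified
FEED_EXPORT_FIELDS = ['url', 'title', 'sellername', 'travlink', 'travlink1', 'rating', 'Crreview', 'feature', 'Description', 'proddescription', 'Additonalinfo', 'details', 'detailsextended', 'producttable', 'stockstatus', 'newseller', 'condition', 'deliverystatus', 'price', 'bestsellersrank', 'mainimage', 'subimage']

Or, without touching settings.py, from the command line (your_spider is a placeholder for your actual spider name):

scrapy crawl your_spider -o output.csv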
The reason your code does not work properly is that you never close the CSV file you open when initializing self.csvwriter. You should use the open_spider and close_spider methods to open the file and close it once the items have been processed; take a look at the JSON pipeline example in the Scrapy docs.
So your pipeline above should work with the following code:
import csv
import os.path

class GraingerPipeline(object):

    csv_file = None

    def open_spider(self, spider):
        # Called once when the spider opens (note the spider argument).
        # Write the header only for a fresh file; the original checked
        # 'CONTENT_psysci.csv' even though it writes to 'safale.csv'.
        write_header = not os.path.isfile('safale.csv')
        self.csv_file = open('safale.csv', 'a', newline="", encoding='utf8')
        self.csvwriter = csv.writer(self.csv_file)
        if write_header:
            self.csvwriter.writerow(['url','Title','sellername','travlink','travlink1','rating','Crreview','feature','Description','proddescription','Additonalinfo','details','detailsextended','producttable','stockstatus','newseller','condition','deliverystatus','price','bestsellersrank','mainimage','subimage'])

    def process_item(self, item, spider):
        self.csvwriter.writerow([item['url'],item['title'],item['sellername'],item['travlink'],item['travlink1'],item['rating'],item['Crreview'],item['feature'],item['Description'],item['proddescription'],item['Additonalinfo'],item['details'],item['detailsextended'],item['producttable'],item['stockstatus'],item['newseller'],item['condition'],item['deliverystatus'],item['price'],item['bestsellersrank'],item['mainimage'],item['subimage']])
        return item

    def close_spider(self, spider):
        # Called once when the spider closes; flushes buffered rows to disk.
        if self.csv_file:
            self.csv_file.close()
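Note that Scrapy passes the spider object to open_spider and close_spider, so both methods need that second argument (with a one-argument signature, Scrapy's call raises a TypeError). Because these hooks run automatically when the crawl starts and finishes, the file is reliably flushed and closed even when the crawl ends early.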