Scrapinghub puts my results in the log instead of in items

Time: 2019-02-28 09:41:36

Tags: json scrapy scrapy-spider scrapy-pipeline scrapinghub

I have a working spider project for extracting URL contents (no CSS). I have crawled several sets of data and stored them in a series of .csv files. Now I'm trying to set it up to run on Scrapinghub for long-term scraping. So far I have been able to upload the spider and get it working on Scrapinghub.

My problem is that the results appear under "Logs" instead of under "Items". The amount of data exceeds the log capacity, so it gives me an error. How can I set up my pipelines/extractor to work properly and return a json or csv file? I would also be happy with a solution that sends the scraped data to a database, since I failed to achieve that as well. Any guidance is appreciated.

The spider:

class DataSpider(scrapy.Spider):
    name = "Data_2018"

    def url_values(self):
        time = list(range(1538140980, 1538140820, -60))
        return time

    def start_requests(self):
        allowed_domains = ["https://website.net"]
        list_urls = []
        for n in self.url_values():
            list_urls.append("https://website.net/.../.../.../all/{}".format(n))

        for url in list_urls:
            yield scrapy.Request(url=url, callback=self.parse, dont_filter=True)

    def parse(self, response):
        data = response.body
        items = positionsItem()
        items['file'] = data
        yield items

The pipeline:

class positionsPipeline(object):

    def process_item(self, item, spider):
        return item

The settings:

BOT_NAME = 'Positions'
SPIDER_MODULES = ['Positions.spiders']
NEWSPIDER_MODULE = 'Positions.spiders'
USER_AGENT = get_random_agent()
ROBOTSTXT_OBEY = True
CONCURRENT_REQUESTS = 32
DOWNLOAD_DELAY = 10
SPIDER_MIDDLEWARES = {
    'Positions.middlewares.positionsSpiderMiddleware': 543,
}
DOWNLOADER_MIDDLEWARES = {
    'Positions.middlewares.positionsDownloaderMiddleware': 543,
}
ITEM_PIPELINES = {
    'Positions.pipelines.positionsPipeline': 300,
}
HTTPCACHE_ENABLED = True
HTTPCACHE_EXPIRATION_SECS = 0
HTTPCACHE_DIR = 'httpcache'
HTTPCACHE_IGNORE_HTTP_CODES = []
HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

The items:


class positionsItem(scrapy.Item):
    file = scrapy.Field()

The Scrapinghub log shows:

13: 2019-02-28 07:46:13 ERROR   Rejected message because it was too big: ITM {"_type":"AircraftpositionsItem","file":"{\"success\":true,\"payload\":{\"aircraft\":{\"0\":{\"000001\":[null,null,\"CFFAW\",9.95729,-84.1405,9500,90,136,1538140969,null,null,\"2000\",\"2-39710687\",[9.93233,-84.1386,277]],\"000023\":[\"ULAC\",null,\"PH4P4\",

1 answer:

Answer 0 (score: 0)

Looking at your settings file, there is no predefined feed output mechanism for Scrapy to use. It's odd that it worked locally the first time (producing .csv files).

In any case, these are the extra lines to add to settings.py for Scrapy to work properly. If you just want to feed the output to a local .csv file:

# Local .csv version
FEED_URI = 'file://NAME_OF_FILE_PATH.csv'
FEED_FORMAT = 'csv'
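(Editorially, note that `FEED_URI` and `FEED_FORMAT` were later deprecated in Scrapy 2.1 in favor of the single `FEEDS` setting; a minimal equivalent sketch, assuming the same placeholder file path:)

```python
# Scrapy 2.1+ equivalent of FEED_URI/FEED_FORMAT: a FEEDS dict
# mapping each output URI to its export options.
FEEDS = {
    'file://NAME_OF_FILE_PATH.csv': {'format': 'csv'},
}
```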

I also use this version for uploading a json file to an S3 bucket:

# Remote S3 .json version
AWS_ACCESS_KEY_ID = 'YOUR_AWS_ACCESS_KEY_ID'
AWS_SECRET_ACCESS_KEY = 'YOUR_AWS_SECRET_ACCESS_KEY'

FEED_URI = 's3://BUCKET_NAME/NAME_OF_FILE_PATH.json'
FEED_FORMAT = 'json'
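Separately, the "Rejected message because it was too big" error comes from the spider stuffing the entire response body into a single item field, which exceeds the platform's per-item size limit. A sketch of a reworked `parse()` that splits the JSON payload into one small item per record; the `payload`/`aircraft` structure is guessed from the log excerpt and may not match the real API:

```python
import json

class DataSpider:  # minimal stand-in; subclass scrapy.Spider in the real project
    def parse(self, response):
        # Decode the JSON body instead of yielding it wholesale, so each
        # item stays well under the per-item size limit. The key names
        # "payload" and "aircraft" are assumptions based on the log line.
        payload = json.loads(response.body)
        for aircraft_id, record in payload.get("payload", {}).get("aircraft", {}).items():
            yield {"aircraft_id": aircraft_id, "record": record}
```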