Scrapy / BigQuery fails when closing the spider and raises the following error: OSError: [Errno 5] Input/output error

Time: 2019-06-12 08:15:06

Tags: python error-handling scrapy google-bigquery

I launched a CrawlSpider to crawl the categories of an online shopping site; there are about 760k items. After 11 hours I checked the logs and realized the spider had shut down. It failed while calling the close_spider() function from the pipeline. Basically, my own implementation of close_spider() establishes a connection between the spider and BigQuery and transfers the locally saved jsonlines file to a BigQuery database. But, as I mentioned, it fails at this step.

I ran the close_spider() function manually, and it successfully transferred the same saved jsonlines file to BigQuery. By the way, the jsonlines file has about 466k lines in it. I also tried the same spider on a different category with 8k items, and it transferred the feed file to BigQuery successfully without any error message. I have run into this error twice; the first time I received it, the spider had scraped 700k items.

Here is the log file:

2019-06-11 23:18:12 [scrapy.extensions.logstats] INFO: Crawled 480107 pages (at 787 pages/min), scraped 466560 items (at 772 items/min)
2019-06-11 23:18:33 [scrapy.core.engine] INFO: Closing spider (finished)
2019-06-11 23:18:33 [scrapy.core.engine] ERROR: Scraper close failure
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/twisted/internet/defer.py", line 654, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/home/togayyazar/etsy/etsy/pipelines.py", line 20, in close_spider
    self.write_to_bq()
  File "/home/togayyazar/etsy/etsy/pipelines.py", line 30, in write_to_bq
    print("-----BIGQUERY-----")
OSError: [Errno 5] Input/output error
2019-06-11 23:18:33 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 217195256,
 'downloader/request_count': 480652,
 'downloader/request_method_count/GET': 480652,
 'downloader/response_bytes': 29983627714,
 'downloader/response_count': 480652,
 'downloader/response_status_count/200': 480373,
 'downloader/response_status_count/301': 254,
 'downloader/response_status_count/400': 6,
 'downloader/response_status_count/503': 19,
 'dupefilter/filtered': 358230,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 6, 11, 23, 18, 33, 739888),
 'httperror/response_ignored_count': 6,
 'httperror/response_ignored_status_count/400': 6,
 'item_scraped_count': 466833,
 'log_count/ERROR': 1,
 'log_count/INFO': 663,
 'memusage/max': 456044544,
 'memusage/startup': 61976576,
 'request_depth_max': 88,
 'response_received_count': 480379,
 'retry/count': 19,
 'retry/reason_count/503 Service Unavailable': 19,
 'scheduler/dequeued': 480652,
 'scheduler/dequeued/memory': 480652,
 'scheduler/enqueued': 480652,
 'scheduler/enqueued/memory': 480652,
 'start_time': datetime.datetime(2019, 6, 11, 12, 30, 12, 400853)}
2019-06-11 23:18:33 [scrapy.core.engine] INFO: Spider closed (finished)

And the close_spider() function:

def close_spider(self, spider):
    # Close the local jsonlines feed, then upload it to BigQuery.
    self.file.close()
    self.write_to_bq()

def write_to_bq(self):
    print("-----BIGQUERY-----")
    bq = BigQuery()
    dataset_name = self.category

    # Create the dataset on first use.
    if not bq.dataset_exists(dataset_name):
        bq.create_dataset(dataset_name)

    path = "/home/togayyazar/etsy/" + self.file_path
    table_name = self.date_time
    bq.load_table(
        path,
        table_name,
        dataset_name,
        'NEWLINE_DELIMITED_JSON',
    )

Any help would be appreciated.

1 Answer:

Answer 0 (score: 0)

If you look at the error traceback, you will see that the exception is raised inside the print() function:

File "/home/togayyazar/etsy/etsy/pipelines.py", line 30, in write_to_bq
    print("-----BIGQUERY-----") OSError: [Errno 5] Input/output error

Check this thread to understand what is going on.
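In short, OSError: [Errno 5] on a print() usually means that stdout's terminal has gone away, for example because the session that launched the 11-hour crawl was closed. If you nevertheless want to keep a console message in the cleanup path, a minimal defensive sketch (safe_print here is a hypothetical helper, not part of Scrapy) is to catch the OSError so a purely diagnostic message can never abort the BigQuery upload:

import sys

def safe_print(*args, **kwargs):
    # A detached stdout makes print() raise OSError: [Errno 5];
    # swallow it so a diagnostic message cannot kill the cleanup.
    try:
        print(*args, **kwargs)
        sys.stdout.flush()
    except OSError:
        pass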

I suggest you simply remove the print, or replace it with the logging module. If you want to use the spider's logger, it is exposed as the spider's logger attribute; but if you want a logger named after the pipeline, you can do the following:

import logging

class YourPipeline(object):

    def __init__(self):
        # Create a logger with the pipeline name
        self.logger = logging.getLogger(self.__class__.__name__) 

    def close_spider(self, spider):
        self.file.close()
        self.write_to_bq()

    def write_to_bq(self):
        self.logger.debug("-----BIGQUERY-----")
        # rest of your code
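Since close_spider() receives the spider instance, an equivalent option is to go through the spider's own logger, which Scrapy already hooks into its logging configuration. A sketch of the same method using it:

    def close_spider(self, spider):
        self.file.close()
        # spider.logger writes through Scrapy's logging setup
        # rather than raw stdout, so it survives a lost terminal.
        spider.logger.info("-----BIGQUERY-----")
        self.write_to_bq()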