Can't figure out why Scrapy won't insert Bitcoin prices into MongoDB

Asked: 2018-01-12 04:26:05

Tags: python mongodb scrapy

I'm new to Python and Scrapy. I've worked through a few tutorials and have managed to store data in MongoDB before, but it isn't working for my own simple project: fetching an API and putting the Bitcoin price into a Mongo database. My Scrapy project is as follows:

bitscrape/spiders/__init__.py
# This package will contain the spiders of your Scrapy project
#
# Please refer to the documentation for information on how to create and manage
# your spiders.
import json

import scrapy

from bitscrape.items import BitscrapeItem


class BitcoinSpider(scrapy.Spider):
    name = 'bitcoin_spider'
    allowed_domains = ['coindesk.com']
    # The attribute must be named start_urls (plural) for Scrapy to
    # schedule the request.
    start_urls = ["https://api.coindesk.com/v1/bpi/currentprice.json"]

    def parse(self, response):
        # Parse the JSON body of the response Scrapy already fetched;
        # a separate requests.get() call is unnecessary, and a bare
        # `page` class attribute would not be visible inside this
        # method anyway (it would raise a NameError).
        item = BitscrapeItem()
        q = json.loads(response.text)
        item["time_posted"] = q['time']['updated']
        item["price_used"] = q['bpi']['USD']['rate']
        yield item
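
For reference, the two lookups in parse assume the API returns a payload shaped roughly like this (an abridged sketch; only the keys the spider actually reads are shown, and the values are illustrative):

# Abridged, illustrative shape of the currentprice.json payload.
payload = {
    "time": {"updated": "Jan 12, 2018 16:27:00 UTC"},
    "bpi": {"USD": {"rate": "13,500.0000"}},
}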

Below is my items.py:

bitscrape/items.py
import scrapy


class BitscrapeItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    time_posted = scrapy.Field()
    price_used = scrapy.Field()

Below is my middlewares.py (I didn't change this):

from scrapy import signals


class BitscrapeSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.

        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.

        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.

        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn’t have a response associated.

        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)

Below is my settings.py (these are the only changes I made):

    DOWNLOAD_DELAY = .25
    RANDOMIZE_DOWNLOAD_DELAY = True

    # ...

    # Configure item pipelines
    # See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
    ITEM_PIPELINES = {
        'bitscrape.pipelines.MongoPipeline': 300,
    }
    MONGO_URI = 'mongodb://localhost:27017'
    MONGO_DATABASE = 'z-bitscrape'

Below is my pipeline class:

import logging
import pymongo

class MongoPipeline(object):

    collection_name = 'bitcoin_prices'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        ## pull in information from settings.py
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE')
        )

    def open_spider(self, spider):
        ## initializing spider
        ## opening db connection
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        ## clean up when spider is closed
        self.client.close()

    def process_item(self, item, spider):
        ## how to handle each post
        ## insert_one replaces the deprecated Collection.insert
        self.db[self.collection_name].insert_one(dict(item))
        logging.debug("Post added to MongoDB")
        return item
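
As a sanity check independent of Scrapy, stored documents can be listed directly with PyMongo (a minimal sketch, assuming the same URI, database, and collection names as above):

import pymongo

# Connect with the same URI and database name used in settings.py.
client = pymongo.MongoClient('mongodb://localhost:27017')
db = client['z-bitscrape']

# Print every stored document; empty output means nothing was inserted.
for doc in db['bitcoin_prices'].find():
    print(doc)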

Here is the output from my mongod terminal:

2018-01-12T10:27:29.794-0600 I NETWORK  [listener] connection accepted from 127.0.0.1:50138 #2 (2 connections now open)
2018-01-12T10:27:30.159-0600 I NETWORK  [conn2] end connection 127.0.0.1:50138 (1 connection now open)

When I run scrapy crawl, no new database shows up, so naturally the 'bitcoin_prices' collection doesn't appear either, because the 'z-bitscrape' database was never created.

Finally, here is the output from my command prompt window:

(mynews_grabber) ..\PycharmProjects\mynews_bit\bitscrape>scrapy crawl bitcoin_spider
2018-01-12 10:27:29 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: bitscrape)
2018-01-12 10:27:29 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'bitscrape', 'DOWNLOAD_DELAY': 0.25, 'NEWSPIDER_MODULE': 'bitscrape.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': [
'bitscrape.spiders']}
2018-01-12 10:27:29 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2018-01-12 10:27:29 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-01-12 10:27:29 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-01-12 10:27:29 [scrapy.middleware] INFO: Enabled item pipelines:
['bitscrape.pipelines.MongoPipeline']
2018-01-12 10:27:29 [scrapy.core.engine] INFO: Spider opened
2018-01-12 10:27:29 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-01-12 10:27:29 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-01-12 10:27:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://api.coindesk.com/robots.txt> (referer: None)
2018-01-12 10:27:29 [scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden by robots.txt: <GET https://api.coindesk.com/v1/bpi/currentprice.json>
2018-01-12 10:27:30 [scrapy.core.engine] INFO: Closing spider (finished)
2018-01-12 10:27:30 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1,
 'downloader/exception_type_count/scrapy.exceptions.IgnoreRequest': 1,
 'downloader/request_bytes': 224,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 580,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 1, 12, 16, 27, 30, 159735),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2018, 1, 12, 16, 27, 29, 793714)}
2018-01-12 10:27:30 [scrapy.core.engine] INFO: Spider closed (finished)

Any help is greatly appreciated!

1 Answer:

Answer 0 (score: 0):

According to the log from your Scrapy run, you are being blocked by robots.txt:

...
2018-01-12 10:27:29 [scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden by robots.txt: <GET https://api.coindesk.com/v1/bpi/currentprice.json>
...

So Scrapy stops before it ever reaches your parse method and yields any items (hence, nothing gets passed to the pipeline). You can try to work around this by setting ROBOTSTXT_OBEY = False, either in settings.py or in the spider's custom_settings attribute.
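
For example (a minimal sketch; either change alone is enough, and the custom_settings variant limits the override to this one spider):

# Option 1: project-wide, in bitscrape/settings.py
ROBOTSTXT_OBEY = False

# Option 2: per spider, via the custom_settings class attribute
import scrapy

class BitcoinSpider(scrapy.Spider):
    name = 'bitcoin_spider'
    custom_settings = {
        'ROBOTSTXT_OBEY': False,
    }

Note that disabling robots.txt handling means the crawler ignores the site's stated crawling rules, so it is worth confirming the API's terms allow this kind of access.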