I'm trying to connect to MongoDB through PyMongo with a Scrapy pipeline so I can create a new database and populate it with what I've just scraped, but I'm running into a strange problem. Following the basic tutorial, I set up two terminals, one to run scrapy and the other to run mongod. Unfortunately, when I run the scrapy code after starting mongod, mongod doesn't seem to pick up the scrapy pipeline I'm trying to set up; it just sits at the "waiting for connections on port 27017" notice.
In terminal 1 (scrapy) I set the directory to Documents/PyProjects/twitterBot/krugman
In terminal 2 (mongod) I set it to Documents/PyProjects/twitterBot
The scripts I'm using are as follows: krugman/krugman/spiders/krugSpider.py (pulls Paul Krugman's blog entries):
from scrapy import http
from scrapy.selector import Selector
from scrapy.spiders import CrawlSpider
import scrapy
import pymongo
import json
from krugman.items import BlogPost
class krugSpider(CrawlSpider):
    name = 'krugbot'
    start_url = ['https://krugman.blogs.nytimes.com']

    def __init__(self):
        self.url = 'https://krugman.blogs.nytimes.com/more_posts_jsons/page/{0}/?homepage=1&apagenum={0}'

    def start_requests(self):
        yield http.Request(self.url.format('1'), callback = self.parse_page)

    def parse_page(self, response):
        data = json.loads(response.body)
        for block in range(len(data['posts'])):
            for article in self.parse_block(data['posts'][block]):
                yield article
        page = data['args']['paged'] + 1
        url = self.url.format(str(page))
        yield http.Request(url, callback = self.parse_page)

    def parse_block(self, content):
        article = BlogPost(author = 'Paul Krugman', source = 'Blog')
        paragraphs = Selector(text = str(content['html']))
        article['paragraphs'] = paragraphs.css('p.story-body-text::text').extract()
        article['links'] = paragraphs.css('p.story-body-text a::attr(href)').extract()
        article['datetime'] = content['post_date']
        article['post_id'] = content['post_id']
        article['url'] = content['permalink']
        article['title'] = content['headline']
        yield article
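As an aside on the pagination step above: the URL template uses the same `{0}` placeholder twice, so a single `format()` argument fills both the path segment and the `apagenum` query parameter. A quick standalone check (template copied from the spider):

```python
# the same {0} placeholder appears twice, so format() fills both with the page number
url_template = 'https://krugman.blogs.nytimes.com/more_posts_jsons/page/{0}/?homepage=1&apagenum={0}'
print(url_template.format('2'))
# -> https://krugman.blogs.nytimes.com/more_posts_jsons/page/2/?homepage=1&apagenum=2
```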
krugman/krugman/settings.py:
ITEM_PIPELINES = ['krugman.pipelines.KrugmanPipeline']
MONGODB_SERVER = 'localhost'
MONGODB_PORT = 27017
MONGODB_DB = 'ScrapeDB'
MONGODB_TWEETS = 'tweetCol'
MONGODB_FACEBOOK = 'fbCol'
MONGODB_BLOG = 'blogCol'
krugman/krugman/pipelines.py:
from pymongo import MongoClient
from scrapy.conf import settings
from scrapy import log
class KrugmanPipeline(object):

    def __init(self):
        connection = MongoClient(settings['MONGODB_SERVER'], settings['MONGODB_PORT'])
        db = connection[settings['MONGODB_DB']]
        self.collection = db[settings['MONGODB_BLOG']]

    def process_item(self, item, spider):
        self.collection.insert_one(dict(item))
        log.msg("Test this out")
        return item
I'm not getting any error messages, so I can't troubleshoot. The pipeline just seems to refuse to fire. Any ideas what my problem might be?
Answer 0 (score: 0)
In your settings, the pipeline isn't registered correctly: ITEM_PIPELINES should be a dict mapping each pipeline's dotted path to an order value, for example:
ITEM_PIPELINES = {
'crawler.pipelines.MongoPipeline': 800,
'scrapy.pipelines.images.ImagesPipeline': 300,
}
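Adapting that to the names in the question (a sketch, assuming the `krugman` project layout and `KrugmanPipeline` class shown above), the setting would be:

```python
# settings.py -- register the pipeline as a dict entry, not a list item;
# the key is the dotted path, the value is the run order (0-1000, lower runs first)
ITEM_PIPELINES = {
    'krugman.pipelines.KrugmanPipeline': 300,
}
```

Separately, note that `__init` in pipelines.py is missing its trailing underscores, so it is never treated as the constructor and `self.collection` is never created; renaming it to `__init__` is needed for the pipeline to work once it is registered.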