Scrapy fails to export body text

Date: 2019-10-28 22:57:38

Tags: python scrapy

So, I'm trying out a Scrapy project called RISJbot to extract the content of news articles for research, but I've run into a problem whose cause I can't locate and can't fix: the spider doesn't actually return the body text (never on The Washington Post, and only rarely on CNN), which is the most important part of an article.
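
One check that seems useful is to see whether the fetched HTML contains the body text at all, e.g. in scrapy shell (the XPath here is only my rough guess at the article markup, not necessarily the one RISJbot uses):

$ scrapy shell "<article URL>"
>>> # How many paragraph text nodes are in the HTML Scrapy actually fetched?
>>> len(response.xpath('//article//p//text()').extract())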

I'm not very familiar with Python, but as far as I can tell, it tries several ways to find the body text, and if it can't, it falls back to returning a gzip-compressed, base64-encoded version of the page.
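
For reference, this is a minimal sketch of how I'd decode that fallback to inspect the raw page (I'm assuming the field is a base64 string of gzipped HTML; decode_fallback_page and blob_b64 are my own names, since I don't know what RISJbot actually calls them):

import base64
import gzip

def decode_fallback_page(blob_b64):
    # Reverse the encoding: base64 text -> gzipped bytes -> HTML string
    return gzip.decompress(base64.b64decode(blob_b64)).decode('utf-8')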

What would you suggest I do? So far I haven't found a way around it.

Here is the spider itself:

# -*- coding: utf-8 -*-
from RISJbot.spiders.newssitemapspider import NewsSitemapSpider
from RISJbot.loaders import NewsLoader
# Note: mutate_selector_del_xpath is somewhat naughty. Read its docstring.
from RISJbot.utils import mutate_selector_del_xpath
from scrapy.loader.processors import Identity, TakeFirst
from scrapy.loader.processors import Join, Compose, MapCompose
import re

class WashingtonPostSpider(NewsSitemapSpider):
    name = 'washingtonpost'
    # allowed_domains = ['washingtonpost.com']
    # A list of XML sitemap files, or suitable robots.txt files with pointers.
    sitemap_urls = ['https://www.washingtonpost.com/news-sitemaps/index.xml']

    def parse_page(self, response):
        """@url http://www.washingtonpost.com/business/2019/10/25/us-deficit-hit-billion-marking-nearly-percent-increase-during-trump-era/?hpid=hp_hp-top-table-main_deficit-210pm%3Ahomepage%2Fstory-ans
        @returns items 1
        @scrapes bodytext bylines fetchtime firstpubtime headline source url 
        @noscrapes modtime
        """
        s = response.selector
        # Remove any content from the tree before passing it to the loader.
        # There aren't native scrapy loader/selector methods for this.        
        #mutate_selector_del_xpath(s, '//*[@style="display:none"]')

        l = NewsLoader(selector=s)

        # WaPo's ISO date/time strings are invalid: <datetime>-500 instead of
        # <datetime>-05:00. Note that the various standardised l.add_* methods
        # will generate 'Failed to parse data' log items. We've got it properly
        # here, so they aren't important.
        l.add_xpath('firstpubtime',
                    '//*[@itemprop="datePublished" or '
                        '@property="datePublished"]/@content',
                    MapCompose(self.fix_iso_date)) # CreativeWork

        # These are duplicated in the markup, so uniquise them.
        l.add_xpath('bylines',
                    '//div[@itemprop="author-names"]/span/text()',
                    set)
        l.add_xpath('section',
                    '//*[contains(@class, "headline-kicker")]//text()')


        # Add a number of items of data that should be standardised across
        # providers. Can override these (for TakeFirst() fields) by making
        # l.add_* calls above this line, or supplement gaps by making them
        # below.
        l.add_fromresponse(response)
        l.add_htmlmeta()
        l.add_schemaorg(response)
        l.add_opengraph()
        l.add_scrapymeta(response)

        return l.load_item()

    def fix_iso_date(self, s):
        return re.sub(r'^([0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}[+-])'
                            '([0-9])([0-9]{2})$',
                      r'\g<1>0\g<2>:\g<3>',
                      s)
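
For what it's worth, the fix_iso_date() substitution itself behaves as the comment describes when I try it on a timestamp of that shape (my own test input, not taken from a real page):

>>> import re
>>> re.sub(r'^([0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}[+-])'
...        r'([0-9])([0-9]{2})$',
...        r'\g<1>0\g<2>:\g<3>',
...        '2019-10-25T14:30-500')
'2019-10-25T14:30-05:00'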

The full error message is (there is no 'Traceback' part to it):

ERROR: RISJbot.pipelines.checkcontent: No bodytext: https://www.washingtonpost.com/world/europe/russia-and-cuba-rebuild-ties-that-frayed-after-cold-war/2019/10/29/d046cc0a-fa09-11e9-9e02-1d45cb3dfa8f_story.html

I also found another error, though I'm not sure whether it's related to the body text problem:

ERROR: scrapy.utils.signal: Error caught on signal handler: <bound method FeedExporter.item_scraped of ...>
Traceback (most recent call last):
  File "C:\Users\sigalizer\Anaconda3\envs\scrapyenv\lib\site-packages\twisted\internet\defer.py", line 151, in maybeDeferred
    result = f(*args, **kw)
  File "C:\Users\sigalizer\Anaconda3\envs\scrapyenv\lib\site-packages\pydispatch\robustapply.py", line 55, in robustApply
    return receiver(*arguments, **named)
  File "C:\Users\sigalizer\Anaconda3\envs\scrapyenv\lib\site-packages\scrapy\extensions\feedexport.py", line 243, in item_scraped
    slot = self.slot
AttributeError: 'FeedExporter' object has no attribute 'slot'

0 Answers:

No answers yet