Avoiding scraping data from pages that have already been scraped

Time: 2015-04-01 17:19:30

Tags: python scrapy

Good evening,

I'm still working on my spider that scrapes data from a news site, and I've run into another problem. My original question is posted here: Scrapy outputs [ into my .json file, but that one has been solved.

I've managed to get further, having had to account for empty items and add a search function, and I'm now trying to scrape only the articles I haven't scraped before (bearing in mind that I may still want to extract their links). What I can't figure out is where to put the code that:

a.) defines the time of the last crawl, and
b.) compares an article's date against the date of the last crawl.

I may just be struggling with the logic, so I'm turning to you.

My spider:

# tabbing in python is apparently VERY important so be aware and make sure 
# things that should line up do so

# import the CrawlSpider class, along with its Rule class (this lets us
# recursively crawl pages)

from scrapy.contrib.spiders import CrawlSpider, Rule

#import the link extractor, this extracts links from pages

from scrapy.contrib.linkextractors import LinkExtractor

# import our items as defined in items.py

from basic.items import BasicItem

# import time so that we can get the current date and time

import time

# import re which allows us to search strings using regular expressions

import re

# create a new Spider with the CrawlSpider Class

class BasicSpiderSpider(CrawlSpider):

    # Name of the spider; this is used to run it (i.e. scrapy crawl basic_spider)

    name = "basic_spider"

    # domains that the spider is allowed to crawl over

    allowed_domains = ["news24.com"]

    # where to start crawling from

    start_urls = [
        'http://www.news24.com',
    ]

    # Rules for the link extractor (i.e. where it's allowed to look for links,
    # what to do once it's found them, and whether it's allowed to follow them)

    rules = (Rule(LinkExtractor(), callback="parse_items", follow=True),
    )

    # defining the callback function

    def parse_items(self, response):

        # defines the Top level XPath where all of our information can be found, needs to be
        # as specific as possible to avoid duplicates

        for title in response.xpath('//*[@id="aspnetForm"]'):

            # List of keywords to search through.

            key = re.compile("joburg|durban", re.IGNORECASE)

            # extracting the data to compare with the keywords, this is for the 
            # headlines, the join converts it from a list type to a string type

            headlist = title.xpath('//*[@id="article_special"]//h1/text()').extract()
            head = ''.join(headlist)

            # and this is for the article.

            artlist = title.xpath('//*[@id="article-body"]//text()').extract()
            art = ''.join(artlist)

            # if any keywords are found in the headline:

            if key.search(head):
                if last_crawled > response.xpath('//*[@id="spnDate"]/text()').extract():
                    # define the top level xpath again as python won't look outside 
                    # its current function

                    for thing in response.xpath('//*[@id="aspnetForm"]'):

                        # fills the items defined in items.py with relevant data

                        item = BasicItem()
                        item['Headline'] = thing.xpath('//*[@id="article_special"]//h1/text()').extract()
                        item["Article"] = thing.xpath('//*[@id="article-body"]/p[1]/text()').extract()
                        item["Date"] = thing.xpath('//*[@id="spnDate"]/text()').extract()
                        item["Link"] = response.url

                        # I found that even with being careful about my XPaths I
                        # still got empty fields and lines, the below fixes that

                        if item['Headline']:
                            if item["Article"]:
                                if item["Date"]:
                                    last_crawled = (time.strftime("%Y-%m-%d %H:%M"))
                                    yield item

            # if the headline item doesn't match, check the article item.

            elif key.search(art):
                if last_crawled > response.xpath('//*[@id="spnDate"]/text()').extract():
                    for thing in response.xpath('//*[@id="aspnetForm"]'):
                        item = BasicItem()
                        item['Headline'] = thing.xpath('//*[@id="article_special"]//h1/text()').extract()
                        item["Article"] = thing.xpath('//*[@id="article-body"]/p[1]/text()').extract()
                        item["Date"] = thing.xpath('//*[@id="spnDate"]/text()').extract()
                        item["Link"] = response.url

                        if item['Headline']:
                            if item["Article"]:
                                if item["Date"]:
                                    last_crawled = (time.strftime("%Y-%m-%d %H:%M"))
                                    yield item

It isn't working, but as I said, I'm doubtful about the logic. Could someone tell me whether I'm on the right track here?

Thanks again for all the help.

1 Answer:

Answer 0 (score: 2)

You seem to be using last_crawled completely out of context: it's read before it's ever assigned, and it's compared as a string against the list that extract() returns. But don't trouble yourself over that; you'd be better off using the deltafetch middleware, which was created for exactly what you're trying to do:

"This is a spider middleware to ignore requests to pages containing items seen in previous crawls of the same spider, thus producing a 'delta crawl' containing only new items."

To use deltafetch, first install scrapylib:

pip install scrapylib

After that, enable it in settings.py:

SPIDER_MIDDLEWARES = {
    'scrapylib.deltafetch.DeltaFetch': 100,
}

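# skip requests for pages that already produced items in a previous run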
DELTAFETCH_ENABLED = True
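
For completeness, if you did want to keep your own last_crawled logic, the two pieces missing from your spider are (a) loading the last crawl time from somewhere persistent before the crawl starts, and (b) parsing both dates into datetime objects before comparing them; as written, last_crawled is only assigned after the line that reads it, and a strftime string is being compared against the list that extract() returns. A minimal sketch, assuming the spnDate text parses with a '%Y-%m-%d %H:%M' format (the file name and helper names here are hypothetical):

import os
from datetime import datetime

LAST_CRAWLED_FILE = "last_crawled.txt"  # hypothetical state file
DATE_FORMAT = "%Y-%m-%d %H:%M"          # assumed format of the spnDate text

def load_last_crawled():
    # default to the epoch so that the first run scrapes everything
    if not os.path.exists(LAST_CRAWLED_FILE):
        return datetime(1970, 1, 1)
    with open(LAST_CRAWLED_FILE) as f:
        return datetime.strptime(f.read().strip(), DATE_FORMAT)

def save_last_crawled(moment):
    # record when this crawl ran, for the next run to compare against
    with open(LAST_CRAWLED_FILE, "w") as f:
        f.write(moment.strftime(DATE_FORMAT))

Inside parse_items you would then parse the article's date and compare datetimes, not strings:

date_text = ''.join(response.xpath('//*[@id="spnDate"]/text()').extract())
article_date = datetime.strptime(date_text, DATE_FORMAT)
if article_date > self.last_crawled:
    yield item

with load_last_crawled() called once when the spider starts and save_last_crawled(datetime.now()) called when it closes. But deltafetch does all of this bookkeeping for you, so I'd still go with the middleware.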