How to stop the crawler

Date: 2019-06-24 13:53:46

Tags: web-scraping scrapy

I am trying to write a crawler that visits a website and searches it for a list of keywords, with a max_depth of 2. As soon as any of the keywords appears on any page, the crawler should stop. The problem I am facing is that the crawler does not stop the first time it sees a keyword.

It keeps going even after I tried an early return statement, a break statement, the CloseSpider exception, and even Python's exit().

My crawler class:

from scrapy import Item
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class WebsiteSpider(CrawlSpider):
    name = "webcrawler"
    allowed_domains = ["www.roomtoread.org"]
    start_urls = ["https://www.roomtoread.org"]
    rules = [Rule(LinkExtractor(), follow=True, callback="check_buzzwords")]

    crawl_count = 0
    words_found = 0

    def check_buzzwords(self, response):
        self.__class__.crawl_count += 1
        crawl_count = self.__class__.crawl_count

        wordlist = [
            "sfdc",
            "pardot",
            "Web-to-Lead",
            "salesforce",
        ]

        url = response.url
        # header values are bytes, so the fallback default must be bytes too
        contenttype = response.headers.get("content-type", b"").decode("utf-8").lower()
        data = response.body.decode("utf-8")

        for word in wordlist:
            # find_all_substrings is a helper defined elsewhere in the script
            substrings = find_all_substrings(data, word)
            for pos in substrings:
                ok = False
                if not ok:
                    if self.__class__.words_found == 0:
                        self.__class__.words_found += 1
                        print(word + "," + url + ";")
                        # STOP!  <- the whole crawl should stop here

        return Item()

    def _requests_to_follow(self, response):
        # only follow links from responses that expose an encoding (i.e. text)
        if getattr(response, "encoding", None) is not None:
            return CrawlSpider._requests_to_follow(self, response)
        return []
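(find_all_substrings is not shown in the post; a minimal sketch of what it presumably does, returning the position of every occurrence of word in the page text, could look like this:)

import re

def find_all_substrings(text, word):
    # positions of every literal, non-overlapping occurrence of word in text
    return [match.start() for match in re.finditer(re.escape(word), text)]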

I want it to stop executing when if not ok: evaluates to True.

1 Answer:

Answer 0 (score: 1)

When I want to stop a spider, I usually use the exception from the Scrapy docs: exception scrapy.exceptions.CloseSpider(reason='cancelled')

The example there shows how you can use it:

if 'Bandwidth exceeded' in response.body:
    raise CloseSpider('bandwidth_exceeded')
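One caveat: on Python 3, response.body is bytes, so a literal copy of that docs snippet would raise a TypeError; the check needs a bytes pattern (or response.text, which is str, instead):

if b'Bandwidth exceeded' in response.body:
    raise CloseSpider('bandwidth_exceeded')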

In your case, that would be something like

if not ok:
    raise CloseSpider('keyword_found')
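Put together, a minimal sketch of the posted check_buzzwords callback with this fix (the simplified substring check is an assumption; the point is that raising CloseSpider replaces the STOP! placeholder):

from scrapy import Item
from scrapy.exceptions import CloseSpider

# inside WebsiteSpider:
def check_buzzwords(self, response):
    data = response.body.decode("utf-8")
    for word in ["sfdc", "pardot", "Web-to-Lead", "salesforce"]:
        if word in data:
            print(word + "," + response.url + ";")
            # tells the engine to close the spider; no further requests are scheduled
            raise CloseSpider("keyword_found")
    return Item()

Note that CloseSpider shuts the spider down gracefully: requests that are already scheduled or in flight are typically still processed, so one or two more pages may be downloaded after the exception is raised. That is expected behaviour, not a failure to stop.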

Or, if that is what you meant by the "CloseSpider command" in your question, have you already tried it this way?