Callback function never called by Scrapy

Asked: 2016-01-13 21:43:26

Tags: python callback scrapy scrapy-spider

I am new to Scrapy and Python. I have spent a few hours trying to debug this and searching for a useful answer, but I am still stuck. I am trying to pull data from www.pro-football-reference.com. Here is my current code:

import scrapy

from nfl_predictor.items import NflPredictorItem

class NflSpider(scrapy.Spider):
    name = "nfl2"
    allowed_domains = ["http://www.pro-football-reference.com/"]
    start_url = [
        "http://www.pro-football-reference.com/boxscores/201509100nwe.htm"
    ]

    def parse(self, response):
        print "parse"
        for href in response.xpath('//*[@id="page_content"]/div[1]/table/tr/td/a/@href'):
            url = response.urljoin(href.extract())
            yield scrapy.Request(url, callback=self.parse_game_content)

    def parse_game_content(self, response):
        print "parse_game_content"
        items = []
        for sel in response.xpath('//table[@id = "team_stats"]/tr'):
            item = NflPredictorItem()
            item['away_stats'] = sel.xpath('td[@align = "center"][1]/text()').extract()
            item['home_stats'] = sel.xpath('td[@align = "center"][2]/text()').extract()
            items.append(item)
        return items

I am using the parse command to debug, running:

scrapy parse --spider=nfl2 "http://www.pro-football-reference.com/boxscores/201509100nwe.htm"

and I get the following output:

>>> STATUS DEPTH LEVEL 1 <<<
# Scraped Items  ------------------------------------------------------------
[]

# Requests  -----------------------------------------------------------------
[<GET http://www.pro-football-reference.com/years/2015/games.htm>,
 <GET http://www.nfl.com/scores/2015/REG1>,
 <GET http://www.pro-football-reference.com/boxscores/201509130buf.htm>,
 <GET http://www.pro-football-reference.com/boxscores/201509130chi.htm>,
 <GET http://www.pro-football-reference.com/boxscores/201509130crd.htm>,
 <GET http://www.pro-football-reference.com/boxscores/201509130dal.htm>,
 <GET http://www.pro-football-reference.com/boxscores/201509130den.htm>,
 <GET http://www.pro-football-reference.com/boxscores/201509130htx.htm>,
 <GET http://www.pro-football-reference.com/boxscores/201509130jax.htm>,
 <GET http://www.pro-football-reference.com/boxscores/201509130nyj.htm>,
 <GET http://www.pro-football-reference.com/boxscores/201509130rai.htm>,
 <GET http://www.pro-football-reference.com/boxscores/201509130ram.htm>,
 <GET http://www.pro-football-reference.com/boxscores/201509130sdg.htm>,
 <GET http://www.pro-football-reference.com/boxscores/201509130tam.htm>,
 <GET http://www.pro-football-reference.com/boxscores/201509130was.htm>,
 <GET http://www.pro-football-reference.com/boxscores/201509140atl.htm>,
 <GET http://www.pro-football-reference.com/boxscores/201509140sfo.htm>]

Why does it log the requests for the links I want, but never enter the parse_game_content function to actually scrape the data? I also tested parse_game_content as the parse function to make sure it scrapes the right data, and in that case it works correctly.

Thanks for your help!

1 answer:

Answer 0 (score: 0)

By default, the parse command fetches the given URL and parses it with the spider that handles it, using the method passed with the --callback option, or parse if none is given. In your case only the parse function runs, so the requests it yields are listed in the output but never followed. Change the command to pass --callback, like:

scrapy parse --spider=nfl2 "http://www.pro-football-reference.com/boxscores/201509100nwe.htm" --callback=parse_game_content
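As a side note (worth confirming against your Scrapy version), the parse command also takes a --depth option, which defaults to 1. Raising it should let the requests yielded by parse be followed and handled by their own callbacks, so something like the following would exercise parse and parse_game_content together in one debugging run:

scrapy parse --spider=nfl2 --depth=2 "http://www.pro-football-reference.com/boxscores/201509100nwe.htm"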

Also, it would be better to change the parse_game_content function to yield each item as it is built, like this:

    def parse_game_content(self, response):
        print "parse_game_content"
        for sel in response.xpath('//table[@id="team_stats"]/tr'):
            item = NflPredictorItem()
            item['away_stats'] = sel.xpath('td[@align = "center"][1]/text()').extract()
            item['home_stats'] = sel.xpath('td[@align = "center"][2]/text()').extract()
            yield item
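Returning a list of items from the callback also works in Scrapy, but yielding them one at a time is the more idiomatic generator style and avoids building the whole list in memory before anything reaches the item pipeline.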
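Finally, if you later want to run this as a normal crawl with scrapy crawl nfl2 rather than through the parse command, note that the attribute Scrapy reads is start_urls (plural, not start_url), and allowed_domains should contain bare domain names rather than full URLs, otherwise the requests yielded from parse can be dropped as offsite. A minimal sketch with those two fixes applied (keeping your item class and XPaths unchanged) might look like this:

import scrapy

from nfl_predictor.items import NflPredictorItem


class NflSpider(scrapy.Spider):
    name = "nfl2"
    # Bare domain only; a full URL here can make the offsite filter drop
    # every request yielded from parse.
    allowed_domains = ["pro-football-reference.com"]
    # Scrapy looks for start_urls (plural), not start_url.
    start_urls = [
        "http://www.pro-football-reference.com/boxscores/201509100nwe.htm",
    ]

    def parse(self, response):
        # Follow each linked boxscore and parse it with parse_game_content.
        for href in response.xpath('//*[@id="page_content"]/div[1]/table/tr/td/a/@href'):
            yield scrapy.Request(response.urljoin(href.extract()),
                                 callback=self.parse_game_content)

    def parse_game_content(self, response):
        # One item per row of the team stats table.
        for sel in response.xpath('//table[@id="team_stats"]/tr'):
            item = NflPredictorItem()
            item['away_stats'] = sel.xpath('td[@align="center"][1]/text()').extract()
            item['home_stats'] = sel.xpath('td[@align="center"][2]/text()').extract()
            yield item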