Scrapy item scope issue

Date: 2015-01-08 03:18:27

Tags: python, scope, scrapy, pipeline

I'm running into a scope issue with a Scrapy item (the player item) being returned to my pipeline. I'm fairly sure I know what the problem is, but I'm not sure how to integrate the solution into my code. I'm also fairly sure the pipeline code itself is now written correctly. The player item is only declared inside the parseRoster() function, so I know its scope is limited to that function.

My question now is: where in my code do I need to declare a player item so that my pipeline can see it? My goal is to get this data into my database. I would assume it belongs in the main loop of my code; if that's the case, how can I return both the item and my newly declared player item?

My code is below:

class NbastatsSpider(scrapy.Spider):
    name = "nbaStats"

    start_urls = [
        "http://espn.go.com/nba/teams"      ## only start URL; had some issues when navigating to team roster pages
    ]
    def parse(self, response):
        items = []      ## list that stores each TeamStats item
        i = 0           ## counter needed for older code

        for division in response.xpath('//div[@id="content"]//div[contains(@class, "mod-teams-list-medium")]'):
            for team in division.xpath('.//div[contains(@class, "mod-content")]//li'):
                item = TeamStats()

                item['division'] = division.xpath('.//div[contains(@class, "mod-header")]/h4/text()').extract()[0]
                item['team'] = team.xpath('.//h5/a/text()').extract()[0]
                item['rosterurl'] = "http://espn.go.com" + team.xpath('.//div/span[2]/a[3]/@href').extract()[0]
                items.append(item)
                request = scrapy.Request(item['rosterurl'], callback=self.parseWPNow)
                request.meta['play'] = item

                yield request

        print(item)

    def parseWPNow(self, response):
        item = response.meta['play']
        item = self.parseRoster(item, response)

        return item

    def parseRoster(self, item, response):
        players = Player()
        for player in response.xpath("//td[@class='sortcell']"):
            players['name'] = player.xpath("a/text()").extract()[0]
            players['position'] = player.xpath("following-sibling::td[1]").extract()[0]
            players['age'] = player.xpath("following-sibling::td[2]").extract()[0]
            players['height'] = player.xpath("following-sibling::td[3]").extract()[0]
            players['weight'] = player.xpath("following-sibling::td[4]").extract()[0]
            players['college'] = player.xpath("following-sibling::td[5]").extract()[0]
            players['salary'] = player.xpath("following-sibling::td[6]").extract()[0]
            yield players
        item['playerurl'] = response.xpath("//td[@class='sortcell']/a").extract()
        yield item

1 Answer:

Answer 0 (score: 3)

According to the relevant part of Scrapy's data flow:

    The Engine sends scraped Items (returned by the Spider) to the Item Pipelines and Requests (returned by the Spider) to the Scheduler.

In other words, return or yield the item instances from the spider, and they will then be passed to your pipeline's process_item() method. Since you have multiple item classes, use the isinstance() built-in function to distinguish between them:
def process_item(self, item, spider):
    if isinstance(item, TeamStats):
        # process team stats
        ...
    elif isinstance(item, Player):
        # process player
        ...
    return item  # return the item so later pipelines can also see it
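To make the flow concrete, here is a minimal standalone sketch of the same idea. The dict subclasses stand in for the real scrapy.Item classes, the roster rows are made-up sample data, and the "database" is just two in-memory lists, so this runs without Scrapy installed; in a real project the pipeline would write to your actual database instead.

```python
# Plain dict subclasses stand in for the scrapy.Item classes.
class TeamStats(dict):
    pass

class Player(dict):
    pass

def parse_roster(rows):
    """Mimics the spider callback: yield one Player per roster row,
    then yield the TeamStats item itself."""
    team = TeamStats(team="Boston Celtics", division="Atlantic")
    for row in rows:
        yield Player(name=row["name"], position=row["position"])
    yield team

class NbaStatsPipeline:
    """Dispatches on item type, as the answer suggests."""
    def __init__(self):
        self.teams, self.players = [], []

    def process_item(self, item, spider=None):
        if isinstance(item, TeamStats):
            self.teams.append(dict(item))
        elif isinstance(item, Player):
            self.players.append(dict(item))
        return item  # return the item so later pipelines also see it

pipeline = NbaStatsPipeline()
rows = [{"name": "Player A", "position": "G"},
        {"name": "Player B", "position": "F"}]
for item in parse_roster(rows):
    pipeline.process_item(item)

print(len(pipeline.players), len(pipeline.teams))  # 2 1
```

Because the callback is a generator that yields both Player and TeamStats instances, Scrapy feeds every yielded item through the pipeline, and the isinstance() checks route each one to the right handling branch.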