Scrapy: extracting data from a list of selectors doesn't work correctly

Asked: 2015-03-20 22:31:21

Tags: python xpath web-scraping scrapy

I'm trying to scrape football fixtures from a website, but my spider isn't quite right: either I get the same fixture repeated for every selector, or the homeTeam and awayTeam variables end up as huge arrays containing every home team or every away team across all selectors. Either way, the output should reflect a Home vs Away format.

Here is my current attempt:

class FixtureSpider(CrawlSpider):
    name = "fixturesSpider"
    allowed_domains = ["www.bbc.co.uk"]
    start_urls = [
        "http://www.bbc.co.uk/sport/football/premier-league/fixtures"
    ]

    def parse(self, response):
        for sel in response.xpath('//table[@class="table-stats"]/tbody/tr[@class="preview"]'):
            item = Fixture()
            item['kickoff'] = str(sel.xpath("//table[@class='table-stats']/tbody/tr[@class='preview']/td[3]/text()").extract()[0].strip())
            item['homeTeam'] = str(sel.xpath("//table[@class='table-stats']/tbody/tr/td[2]/p/span/a/text()").extract()[0].strip())
            item['awayTeam'] = str(sel.xpath("//table[@class='table-stats']/tbody/tr/td[2]/p/span/a/text()").extract()[1].strip())
            yield item

This returns the following information over and over:

2015-03-20 21:41:40+0000 [fixturesSpider] DEBUG: Scraped from <200 http://www.bbc.co.uk/sport/football/premier-league/fixtures>
{'awayTeam': 'West Brom', 'homeTeam': 'Man City', 'kickoff': '12:45'}
2015-03-20 21:41:40+0000 [fixturesSpider] DEBUG: Scraped from <200 http://www.bbc.co.uk/sport/football/premier-league/fixtures>
{'awayTeam': 'West Brom', 'homeTeam': 'Man City', 'kickoff': '12:45'}

Can anyone tell me where I'm going wrong?

2 Answers:

Answer 0 (score: 2)

The problem is that the XPath expressions you use inside the loop are absolute: they start searching from the root of the document, when they should be relative to the current row that sel points to. In other words, you need to search within the context of the current row.

Fixed version:

for sel in response.xpath('//table[@class="table-stats"]/tbody/tr[@class="preview"]'):
    item = Fixture()
    item['kickoff'] =  str(sel.xpath("td[3]/text()").extract()[0].strip())
    item['homeTeam'] = str(sel.xpath("td[2]/p/span/a/text()").extract()[0].strip())
    item['awayTeam'] = str(sel.xpath("td[2]/p/span/a/text()").extract()[1].strip())
    yield item
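You don't need Scrapy running to see why the absolute expressions duplicate the first row. A minimal sketch with the standard library's ElementTree (the markup below is a made-up two-row fragment, not the real BBC page) reproduces the same symptom:

```python
import xml.etree.ElementTree as ET

# Illustrative stand-in for one fixtures table.
table = ET.fromstring(
    '<table>'
    '<tr class="preview"><td>12:45</td><td>Man City</td></tr>'
    '<tr class="preview"><td>15:00</td><td>Aston Villa</td></tr>'
    '</table>'
)

# Querying from the document root inside the loop re-runs the same
# document-wide search every iteration -> the first cell, twice.
per_row_absolute = [table.find('.//td').text for row in table.findall('tr')]

# A query relative to the current row is scoped to that row only.
per_row_relative = [row.find('td').text for row in table.findall('tr')]

print(per_row_absolute)  # ['12:45', '12:45'] -- the duplicated-fixture bug
print(per_row_relative)  # ['12:45', '15:00'] -- one value per row
```

The same scoping rule applies to Scrapy selectors: `sel.xpath("td[3]/text()")` (or `".//td[3]/text()"`) searches under the row, while a path starting with `//` always searches the whole page.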

This is the output I get:

{'awayTeam': 'West Brom', 'homeTeam': 'Man City', 'kickoff': '12:45'}
{'awayTeam': 'Swansea', 'homeTeam': 'Aston Villa', 'kickoff': '15:00'}
{'awayTeam': 'Arsenal', 'homeTeam': 'Newcastle', 'kickoff': '15:00'}
...

If you also want the match date, you need a change of strategy: iterate over the date headers (the h2 elements with class table-header) and, for each one, grab the first following-sibling table element:

for date in response.xpath('//h2[@class="table-header"]'):
    matches = date.xpath('.//following-sibling::table[@class="table-stats"][1]/tbody/tr[@class="preview"]')
    date = date.xpath('text()').extract()[0].strip()

    for match in matches:
        item = Fixture()
        item['date'] = date
        item['kickoff'] = match.xpath("td[3]/text()").extract()[0].strip()
        item['homeTeam'] = match.xpath("td[2]/p/span/a/text()").extract()[0].strip()
        item['awayTeam'] = match.xpath("td[2]/p/span/a/text()").extract()[1].strip()
        yield item
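Conceptually, the `following-sibling::table[1]` lookup assigns each table to the nearest preceding header. The same grouping can be sketched in plain Python over a flattened sequence of headers and rows (the dates and teams below are illustrative, not scraped data):

```python
# Hypothetical flattened page: each date header is followed by its match rows,
# mirroring the document order the h2/following-sibling XPath walks over.
page = [
    ('header', 'Saturday 21st March 2015'),
    ('match', ('12:45', 'Man City', 'West Brom')),
    ('match', ('15:00', 'Aston Villa', 'Swansea')),
    ('header', 'Sunday 22nd March 2015'),
    ('match', ('15:00', 'Newcastle', 'Arsenal')),
]

fixtures = []
current_date = None
for kind, payload in page:
    if kind == 'header':
        current_date = payload            # a new date section begins
    else:
        kickoff, home, away = payload     # rows inherit the most recent header
        fixtures.append({'date': current_date, 'kickoff': kickoff,
                         'homeTeam': home, 'awayTeam': away})
```

Each match item ends up tagged with the date of the header above it, which is exactly what the nested-loop spider yields.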

Answer 1 (score: 0)

Try the selectors below. I think you need ...tbody//tr/... rather than ...tbody/tr/... to get all of the table rows instead of just the first one.

    item['kickoff'] =  str(sel.xpath("//table[@class='table-stats']/tbody//tr[@class='preview']/td[3]/text()").extract()[0].strip())
    item['homeTeam'] = str(sel.xpath("//table[@class='table-stats']/tbody//tr/td[2]/p/span/a/text()").extract()[0].strip())
    item['awayTeam'] = str(sel.xpath("//table[@class='table-stats']/tbody//tr/td[2]/p/span/a/text()").extract()[1].strip())