Scrapy: extracting data from a source and from the links within it

Date: 2016-05-17 09:36:19

Tags: python xpath scrapy scrapy-spider

Link to the original question:

Scrapy getting data from links within table

Starting from the link https://www.tdcj.state.tx.us/death_row/dr_info/trottiewillielast.html

I am trying to get the information from the main table together with the data behind two other links in that table. I managed to pull data from one of them, but the problem is following the second link and appending its data to the same row.

from urlparse import urljoin  # Python 2; on Python 3 use: from urllib.parse import urljoin

import scrapy
from scrapy.item import Item, Field


class DeathItem(Item):
    firstName = Field()
    lastName = Field()
    Age = Field()
    Date = Field()
    Race = Field()
    County = Field()
    Message = Field()
    Passage = Field()

class DeathSpider(scrapy.Spider):
    name = "death"
    allowed_domains = ["tdcj.state.tx.us"]
    start_urls = [
        "http://www.tdcj.state.tx.us/death_row/dr_executed_offenders.html"
    ]

    def parse(self, response):
        sites = response.xpath('//table/tbody/tr')
        for site in sites:
            item = DeathItem()

            item['firstName'] = site.xpath('td[5]/text()').extract()
            item['lastName'] = site.xpath('td[4]/text()').extract()
            item['Age'] = site.xpath('td[7]/text()').extract()
            item['Date'] = site.xpath('td[8]/text()').extract()
            item['Race'] = site.xpath('td[9]/text()').extract()
            item['County'] = site.xpath('td[10]/text()').extract()

            url = urljoin(response.url, site.xpath("td[3]/a/@href").extract_first())
            url2 = urljoin(response.url, site.xpath("td[2]/a/@href").extract_first())
            if url.endswith("html"):
                request = scrapy.Request(url, meta={"item": item,"url2" : url2}, callback=self.parse_details)
                yield request
            else:
                yield item
    def parse_details(self, response):
        item = response.meta["item"]
        url2 = response.meta["url2"]
        item['Message'] = response.xpath("//p[contains(text(), 'Last Statement')]/following-sibling::p/text()").extract()
        request = scrapy.Request(url2, meta={"item": item}, callback=self.parse_details2)
        return request

    def parse_details2(self, response):
        item = response.meta["item"]
        item['Passage'] = response.xpath("//p/text()").extract_first()
        return item
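(As an aside, the `urljoin` calls in `parse` resolve the relative hrefs from the table against the listing page's URL. A quick standalone check, using Python 3's `urllib.parse` in place of the Python 2 `urlparse` module the spider imports:)

```python
from urllib.parse import urljoin  # Python 3 equivalent of Python 2's urlparse.urljoin

page = "http://www.tdcj.state.tx.us/death_row/dr_executed_offenders.html"
# A relative href from the table resolves against the listing page's URL:
# the last path segment is replaced by the relative path.
print(urljoin(page, "dr_info/trottiewillielast.html"))
# http://www.tdcj.state.tx.us/death_row/dr_info/trottiewillielast.html
```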

I understand how we pass arguments to a request through meta. But the flow is still unclear to me, and at this point I am not sure whether this is even possible. I have looked at several examples, including these:

using scrapy extracting data inside links

How can i use multiple requests and pass items in between them in scrapy python

Technically, the data will mirror the main table, with just the two links holding data on their target pages.

Any help or direction would be appreciated.

1 Answer:

Answer 0 (score: 2)

The problem in this case lies in this piece of code

url = urljoin(response.url, site.xpath("td[2]/a/@href").extract_first())
url2 = urljoin(response.url, site.xpath("td[3]/a/@href").extract_first())

if url.endswith("html"):
    request = scrapy.Request(url, callback=self.parse_details)
    request.meta['item'] = item
    request.meta['url2'] = url2
    yield request
elif url2.endswith("html"):
    request = scrapy.Request(url2, callback=self.parse_details2)
    request.meta['item'] = item
    yield request
else:
    yield item


def parse_details(self, response):
    item = response.meta["item"]
    url2 = response.meta["url2"]
    # Use the 'Message' field declared on DeathItem; an undeclared key
    # such as 'About Me' would raise a KeyError on a scrapy Item.
    item['Message'] = response.xpath("//p[contains(text(), 'Last Statement')]/following-sibling::p/text()").extract()
    if url2:
        request = scrapy.Request(url2, callback=self.parse_details2)
        request.meta['item'] = item
        yield request
    else:
        yield item

When you issue a request, you are creating a new "thread" that takes on a life of its own, so `parse_details` cannot see what is being done in `parse_details2`. The way I would do it is to call one from inside the other, as in the code above.


This code has not been thoroughly tested, so leave a comment once you have tried it.
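The flow the answer describes can be sketched without Scrapy at all: each callback fills in part of the item, then either yields another request (carrying the item in `meta`) or yields the finished item. The `Request`/`Response` classes and the `crawl` driver below are simplified stand-ins for Scrapy's engine, not real Scrapy APIs, and the placeholder field values are made up for illustration.

```python
# Minimal stand-ins for Scrapy's Request/Response, just to show the flow.
class Request:
    def __init__(self, url, meta=None, callback=None):
        self.url, self.meta, self.callback = url, meta or {}, callback

class Response:
    def __init__(self, request):
        self.url, self.meta = request.url, request.meta

def parse(response):
    item = {"County": "Harris"}  # stands in for the fields from the table row
    # Pass the partially filled item plus the second URL through meta.
    yield Request("last_statement.html",
                  meta={"item": item, "url2": "offender_info.html"},
                  callback=parse_details)

def parse_details(response):
    item = response.meta["item"]
    item["Message"] = "..."  # would be scraped from the first detail page
    # Chain to the second detail page, still carrying the same item.
    yield Request(response.meta["url2"], meta={"item": item},
                  callback=parse_details2)

def parse_details2(response):
    item = response.meta["item"]
    item["Passage"] = "..."  # would be scraped from the second detail page
    yield item               # the finished row

def crawl(start_callback):
    """Tiny driver playing the role of Scrapy's engine: run callbacks,
    follow yielded requests, collect yielded items."""
    pending = list(start_callback(Response(Request("start.html"))))
    items = []
    while pending:
        result = pending.pop()
        if isinstance(result, Request):
            pending.extend(result.callback(Response(result)))
        else:
            items.append(result)
    return items

print(crawl(parse))
# [{'County': 'Harris', 'Message': '...', 'Passage': '...'}]
```

The point of the simulation: the item only reaches `parse_details2` because every intermediate request carries it forward in `meta`; neither callback can see the other's local state.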