Extracting data from two pages with Scrapy

Asked: 2016-02-23 10:27:41

Tags: scrapy scrapy-spider

I have an agenda page as my start page. It lists the start time and the title of each event, along with a link to each event's detail page.

My spider extracts all the event details (description, location, etc.) from each event's detail page, except for the start time, which I have to extract from the start page.

How can I extract the start time from the start page together with the other data from each detail page? What is the Scrapy way to do this? Using meta['item']? I don't get it... Here is my spider as it stands. Any help is much appreciated!

import scrapy

# LuItem is the project's Item class, defined in items.py
class LuSpider(scrapy.Spider):
    name = "lu"
    allowed_domains = ["example.com"]
    start_urls = ["http://www.example.com/agenda"]

    def parse(self, response):
        for href in response.css("div.toggle_container_show > div > a::attr('href')"):
            url = response.urljoin(href.extract())
            yield scrapy.Request(url, callback=self.parse_agenda_contents)

    def parse_agenda_contents(self, response):
        for sel in response.xpath('//div[@class="container"]'):
            item = LuItem()
            item['EventTitle'] = sel.xpath('div[@class="content"]/div/div[@class="sliderContent"]/h1[@id]/text()').extract()
            item['Description'] = sel.xpath('div[@class="content"]/div/div[@class="sliderContent"]//p').extract()
            yield item

Edit

I tried using request.meta['item'] to carry the start time from the start page, but every event's item ends up with the full list of all start times on the agenda page. How do I get the start time of each individual event? Could someone point me in the right direction?

import scrapy

# LuItem is the project's Item class, defined in items.py
class LuSpider(scrapy.Spider):
    name = "lu"
    allowed_domains = ["example.com"]
    start_urls = ["http://www.example.com/agenda"]

    def parse(self, response):
        item = LuItem()
        item['StartTime'] = response.xpath('//div[contains(., "H")]/span/text()').extract()

        for href in response.css("div.toggle_container_show > div > a::attr('href')"):
            url = response.urljoin(href.extract())
            request = scrapy.Request(url, callback=self.parse_agenda_contents)
            request.meta['item'] = item
            yield request

    def parse_agenda_contents(self, response):
        for sel in response.xpath('//div[@class="container"]'):
            item = response.meta['item']
            item['EventTitle'] = sel.xpath('div[@class="content"]/div/div[@class="sliderContent"]/h1[@id]/text()').extract()
            item['Description'] = sel.xpath('div[@class="content"]/div/div[@class="sliderContent"]//p').extract()
            yield item
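The behaviour described in this edit can be reproduced with plain Python, no Scrapy required (the dicts below are hypothetical stand-ins for LuItem and the outgoing requests): the XPath runs once against the whole agenda page, so extract() returns every start time, and that single item is then shared by reference across all requests.

```python
# Stand-in for the single LuItem built in parse(): extract() matched
# ALL matching <span> elements on the agenda page, so StartTime holds
# the full list of every event's start time.
item = {"StartTime": ["10:00", "12:30", "18:00"]}

# Stand-in for attaching the same item to every outgoing request.
requests = [{"meta": {"item": item}} for _ in range(3)]

# Every callback therefore receives the very same object,
# carrying every start time instead of just its own.
assert all(r["meta"]["item"] is item for r in requests)
assert requests[0]["meta"]["item"]["StartTime"] == ["10:00", "12:30", "18:00"]
```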

2 Answers:

Answer 0 (score: 3)

You are right. Using meta will do the trick in your case. See the official documentation here: http://doc.scrapy.org/en/latest/topics/request-response.html#passing-additional-data-to-callback-functions

def parse_page1(self, response):
    item = MyItem()
    item['main_url'] = response.url
    request = scrapy.Request("http://www.example.com/some_page.html",
                             callback=self.parse_page2)
    request.meta['item'] = item
    return request

def parse_page2(self, response):
    item = response.meta['item']
    item['other_url'] = response.url
    return item
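To see what meta actually does, here is a dependency-free sketch of the round trip. FakeRequest and FakeResponse are hypothetical stand-ins, not the real Scrapy classes: the point is only that the dict attached to the request is handed back on the response that reaches the callback.

```python
# Hypothetical stand-ins (NOT the real Scrapy API) to show the
# meta round-trip between two callbacks.
class FakeRequest:
    def __init__(self, url, callback):
        self.url = url
        self.callback = callback
        self.meta = {}

class FakeResponse:
    def __init__(self, request):
        self.url = request.url
        self.meta = request.meta  # the response exposes the request's meta dict

def parse_page1():
    item = {'main_url': '/agenda'}
    request = FakeRequest('/event/1', callback=parse_page2)
    request.meta['item'] = item  # stash the half-filled item on the request
    return request

def parse_page2(response):
    item = response.meta['item']  # retrieve it in the second callback
    item['other_url'] = response.url
    return item

request = parse_page1()
item = request.callback(FakeResponse(request))
# item now combines data gathered in both callbacks
```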

Answer 1 (score: 1)

This works:

import scrapy

class LuSpider(scrapy.Spider):
    name = "lu"
    allowed_domains = ["example.com"]
    start_urls = ["http://www.example.com/agenda"]

    def parse(self, response):
        StartTimes = response.xpath('//div[@class="toggle_container_show"]/div/span/text()').extract()
        urls = response.xpath('//div[@class="toggle_container_show"]/div/a/@href').extract()

        # pair each start time with its detail-page URL, one item per event
        for StartTime, url in zip(StartTimes, urls):
            item = LuItem()
            item['StartTime'] = StartTime
            request = scrapy.Request(url, callback=self.parse_agenda_contents)
            request.meta['item'] = item
            yield request

    def parse_agenda_contents(self, response):
        for sel in response.xpath('//div[@class="container"]'):
            item = response.meta['item']
            item['EventTitle'] = sel.xpath('div[@class="content"]/div/div[@class="sliderContent"]/h1[@id]/text()').extract()
            item['Description'] = sel.xpath('div[@class="content"]/div/div[@class="sliderContent"]//p').extract()
            yield item
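The key move in this answer is zip: pairing each start time with the matching detail-page URL before any request is made, so every item is created fresh and carries exactly one StartTime. A plain-Python illustration with hypothetical sample values standing in for the two extract() calls:

```python
# Hypothetical sample data standing in for the two extract() results,
# which both come back in document order from the agenda page.
start_times = ["10:00", "12:30", "18:00"]
urls = ["/event/1", "/event/2", "/event/3"]

# zip pairs the lists positionally; each event gets its own dict,
# unlike the shared-item approach in the question's edit.
items = [{"StartTime": t, "url": u} for t, u in zip(start_times, urls)]
```

Note that this relies on both XPath expressions yielding their matches in the same order, and on the two lists having equal length; if one selector misses an element, zip silently truncates and the pairing shifts.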