Make Scrapy follow links and collect data

Date: 2015-05-10 13:56:40

Tags: python web-scraping web-crawler scrapy

I'm trying to write a program in Scrapy that opens the links and collects data from this markup:

<div>
  <p><label>Faculty <input type="text" class="f"></label></p>
  <p><label>Department <input type="text" class="f"></label></p>
</div>

I've managed to get Scrapy to collect all of the links from a given URL, but not to follow them. Any help is greatly appreciated.
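For reference, the visible data in that markup is the label text (the <input> elements carry no text content), and it can be pulled out with an XPath selector. A minimal sketch, assuming the snippet above is representative of the pages being followed (the HTML string and variable names are only illustrative):

from scrapy.selector import Selector

html = '''
<div>
  <p><label>Faculty <input type="text" class="f"></label></p>
  <p><label>Department <input type="text" class="f"></label></p>
</div>
'''

# Build a Selector directly from the snippet in the question
sel = Selector(text=html)

# Each <label> wraps the field name followed by an empty <input>,
# so the label text nodes hold the only values we can actually read
labels = sel.xpath('//div/p/label/text()').extract()
print([t.strip() for t in labels])  # ['Faculty', 'Department']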

1 answer:

Answer 0 (score: 15)

You need to yield Request instances for the links you want to follow, assign a callback, and extract the text of the desired p elements inside that callback:

# -*- coding: utf-8 -*-
import scrapy


# item class included here 
class DmozItem(scrapy.Item):
    # define the fields for your item here like:
    link = scrapy.Field()
    attr = scrapy.Field()


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["craigslist.org"]
    start_urls = [
        "http://chicago.craigslist.org/search/emd?"
    ]

    BASE_URL = 'http://chicago.craigslist.org/'

    def parse(self, response):
        # collect every listing link on the search results page
        links = response.xpath('//a[@class="hdrlnk"]/@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            # follow the link and let parse_attr handle the detail page
            yield scrapy.Request(absolute_url, callback=self.parse_attr)

    def parse_attr(self, response):
        # build an item from the followed page's URL and attribute text
        item = DmozItem()
        item["link"] = response.url
        item["attr"] = "".join(response.xpath("//p[@class='attrgroup']//text()").extract())
        return item
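
With that in place you can run the spider and export the collected items, for example with scrapy crawl dmoz -o items.json. One design note: building absolute URLs via BASE_URL + link assumes the extracted href values are site-relative paths; joining them with urlparse.urljoin(response.url, link) from the standard library is a slightly more robust alternative.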