scrapy - how to populate items with data from a pandas dataframe?

Asked: 2016-06-17 13:20:02

Tags: python pandas scrapy

Assume the following CrawlSpider:

import scrapy
from scrapy.loader import ItemLoader
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from tutorial.items import TestItem
from scrapy.http import HtmlResponse


class TestCrawlSpider(CrawlSpider):
    name = "test_crawl"
    allowed_domains = ["www.immobiliare.it"]
    start_urls = [
        "http://www.immobiliare.it/Roma/case_in_vendita-Roma.html?criterio=rilevanza",
        "http://www.immobiliare.it/Napoli/case_in_vendita-Napoli.html?criterio=rilevanza"
    ]

    rules = (
        Rule(LinkExtractor(allow=(), restrict_xpaths=('//a[@class="no-decoration button next_page_act"]',)), callback="parse_start_url", follow=True),
    )


    def parse_start_url(self, response):
        for selector in response.css('div.content'):
            l = ItemLoader(item=TestItem(), selector=selector)
            l.add_css('Price', '.price::text')
            l.add_value('City', '...')
            l.add_value('Longitude', '...')
            l.add_value('Latitude', '...')
            yield l.load_item()

and the corresponding items.py:

import scrapy
from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst, MapCompose, Join

class TestItem(scrapy.Item):
    Price = scrapy.Field(
        output_processor=MapCompose(unicode.strip),
    )
    City = scrapy.Field(serializer=str)
    Latitude = scrapy.Field(serializer=str)
    Longitude = scrapy.Field(serializer=str)

For each start_url I have the corresponding geographic information ('City', 'Longitude', 'Latitude') stored in a pandas dataframe. For the example above, the dataframe looks like this:

     City Latitude Longitude
0    Roma    40.85     14.30
1  Napoli    41.53     12.30
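(For reference, a dataframe matching the table above can be built like this; a minimal sketch, assuming the coordinates are plain floats:)

```python
import pandas as pd

# Sample geo data matching the table above, one row per start_url city.
df = pd.DataFrame({
    "City": ["Roma", "Napoli"],
    "Latitude": [40.85, 41.53],
    "Longitude": [14.30, 12.30],
})
```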

How can I populate the 'City', 'Longitude' and 'Latitude' items with the information stored in the dataframe?

1 Answer:

Answer 0 (score: 3)

I would use the start_requests() method to put the per-city info into the meta of each request, dumping the dataframe into a dictionary via .to_dict() to simplify the lookup:

# at module level, the snippet below also needs:
import re

import pandas as pd
import scrapy


def start_requests(self):
    df = pd.DataFrame(...)

    # make a dictionary, City -> City info
    # (orient="index" keys the dict by the city names;
    # the default orient would key it by column instead)
    d = df.set_index('City').to_dict(orient='index')

    pattern = re.compile(r"http://www\.immobiliare\.it/(\w+)/")
    for url in self.start_urls:
        city = pattern.search(url).group(1)
        yield scrapy.Request(url, meta={"info": d[city]})

Then, in the callback, get the info dictionary from response.meta:

def parse_start_url(self, response):
    info = response.meta["info"]
    for selector in response.css('div.content'):
        l = ItemLoader(item=TestItem(), selector=selector)
        l.add_css('Price', '.price::text')
        l.add_value('City', info['City'])
        l.add_value('Longitude', info['Longitude'])
        l.add_value('Latitude', info['Latitude'])
        yield l.load_item()

Not tested.
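The dictionary lookup and the URL parsing can at least be checked outside of scrapy. A sketch using only pandas and re, with the dataframe and one of the start_urls from the question:

```python
import re

import pandas as pd

df = pd.DataFrame({
    "City": ["Roma", "Napoli"],
    "Latitude": [40.85, 41.53],
    "Longitude": [14.30, 12.30],
})

# orient="index" keys the dict by city name:
# {'Roma': {'Latitude': 40.85, 'Longitude': 14.3}, ...}
d = df.set_index("City").to_dict(orient="index")

# extract the city name from the start_url
pattern = re.compile(r"http://www\.immobiliare\.it/(\w+)/")
url = "http://www.immobiliare.it/Roma/case_in_vendita-Roma.html?criterio=rilevanza"
city = pattern.search(url).group(1)
info = d[city]
```

With the default orient, `d[city]` would raise a KeyError because the outer keys would be the column names ('Latitude', 'Longitude') rather than the cities, which is why orient="index" matters here.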