Scrapy ItemLoader: combining items

Posted: 2016-10-30 04:50:51

Tags: python json web-scraping scrapy scrapy-spider

I'm trying to use ItemLoader to combine three fields into an array of objects, like this:

[
    {
        "site_title": "Some Site Title",
        "anchor_text": "Click Here",
        "link": "http://example.com/page"
    }
]

As you can see in the JSON output below, it instead collects all the values for each field together into a single item.

How should I approach this to output the array of JSON objects I'm looking for?

Spider file:

import scrapy
from linkfinder.items import LinkfinderItem
from scrapy.loader import ItemLoader

class LinksSpider(scrapy.Spider):
    name = "links"
    allowed_domains = ["wpseotest.com"]
    start_urls = ["https://wpseotest.com"]

    def parse(self, response):
        # These XPaths run against the whole response, so every match on
        # the page gets collected into the list fields of a single item.
        l = ItemLoader(item=LinkfinderItem(), response=response)
        l.add_xpath('site_title', '//title/text()')
        l.add_xpath('anchor_text', '//a//text()')
        l.add_xpath('link', '//a/@href')
        return l.load_item()

items.py:

import scrapy
from scrapy import Field

class LinkfinderItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    site_title = Field()
    anchor_text = Field()
    link = Field()

JSON output:

[
{"anchor_text": ["Globex Corporation", "Skip to content", "Home", "About", "Globex News", "Events", "Contact Us", "3999 Mission Boulevard,\r", "San Diego, CA 92109", "This is a test scheduled\u00a0post.", "Test Title", "Globex Subsidiary Ice Cream Inc. Creates Chicken Wing\u00a0Flavor", "Globex Inc.", "\r\n", "Blog at WordPress.com."], "link": ["https://wpseotest.com/", "#content", "https://wpseotest.com/", "https://wpseotest.com/about/", "https://wpseotest.com/globex-news/", "https://wpseotest.com/events/", "https://wpseotest.com/contact-us/", "http://maps.google.com/maps?z=16&q=3999+mission+boulevard,+san+diego,+ca+92109", "https://wpseotest.com/2016/08/19/this-is-a-test-scheduled-post/", "https://wpseotest.com/2016/06/28/test-title/", "https://wpseotest.com/2015/10/18/globex-subsidiary-ice-cream-inc-creates-chicken-wing-flavor/", "https://wpseotest.wordpress.com", "https://wordpress.com/?ref=footer_blog"], "site_title": ["Globex Corporation \u2013 We make things better, or, sometimes, worse."]}
]

2 Answers:

Answer 0 (score: 1):

You want to yield one item per link here, right? Your current parse() runs all three XPaths against the whole response, so the loader gathers every match on the page into one item. To get what you want, find the link nodes, iterate through them, and extract the fields you then combine into a dict/scrapy.Item:

def parse(self, response):
    site_title = response.xpath("//title/text()").extract_first()
    links = response.xpath("//a")
    for link in links:
        # Scope the loader to a single <a> node so the relative XPaths
        # below only match inside it, yielding one item per link.
        l = ItemLoader(item=LinkfinderItem(), selector=link)
        l.add_value('site_title', site_title)
        l.add_xpath('anchor_text', 'text()')
        l.add_xpath('link', '@href')
        yield l.load_item()

Now you can run scrapy crawl links -o output.json and you should get something like this:

[
    {"site_title": "title",
     "anchor_text": "foo",
     "link": "http://foo.com"},
    {"site_title": "title",
     "anchor_text": "bar",
     "link": "http://bar.com"}
    ...
]
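
One caveat: by default ItemLoader returns every field as a list of the values it collected, so the fields above would actually come out as one-element lists rather than plain strings. A minimal sketch of one way to flatten them, declaring TakeFirst output processors in the Field metadata (TakeFirst lives in scrapy.loader.processors in Scrapy releases of that era):

import scrapy
from scrapy import Field
from scrapy.loader.processors import TakeFirst

class LinkfinderItem(scrapy.Item):
    # TakeFirst unwraps the single-element list the loader collects,
    # so each field serializes as a plain string in the JSON feed.
    site_title = Field(output_processor=TakeFirst())
    anchor_text = Field(output_processor=TakeFirst())
    link = Field(output_processor=TakeFirst())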

Answer 1 (score: 0):

@Granitosaurus, that's how I was doing it originally, before even using items/ItemLoader, just building dictionaries with that approach.

Then I found out about ItemLoader and figured I should use it (thinking it would perform better). Well, that led me to the results in the OP, and to trying to figure out how to put the output back into the shape I was getting before, when I built the dictionaries myself.

Now I'm leaning toward doing it just this way (the same as your approach), only with items and the ItemLoader included.

That seems the easiest to follow. In my example I grab the product nodes:

product_items = response.xpath('//div[contains(@class,"item-div")]')

I loop over them and extract the product details, then just drop them into a dictionary:

# products is a dict and product_name, product_supplier, etc. are
# XPath strings, all defined earlier in the spider.
for item in product_items:
    name = item.xpath(product_name).extract_first()
    if name not in products:
        products[name] = {}
        products[name].update({'product_supplier': item.xpath(product_supplier).extract_first(),
                               'product_weight': item.xpath(product_weight).extract_first(),
                               'product_image': item.xpath(product_image).extract_first(),
                               'a_level': item.xpath(a_level).extract_first(),
                               'b_level': item.xpath(b_level).extract_first(),
                               'price_tag': item.xpath(price_tag).extract_first().strip()})

Now with items/ItemLoader I'll go with the selector=link approach, which should work much like the method I was using before. I wonder if I was wasting my time trying to get this working through items and ItemLoader; figuring it all out was good times though, Grimmy.
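
For reference, a minimal sketch of what that product loop might look like with a loader scoped to each node, mirroring the selector=link approach from the answer above (ProductItem and the product_* XPath strings here are assumed for illustration, not part of my original code):

from scrapy.loader import ItemLoader

def parse(self, response):
    for node in response.xpath('//div[contains(@class,"item-div")]'):
        # Scope the loader to one product <div>; the relative XPaths
        # below are evaluated against that node only.
        l = ItemLoader(item=ProductItem(), selector=node)  # ProductItem is hypothetical
        l.add_xpath('name', product_name)
        l.add_xpath('product_supplier', product_supplier)
        l.add_xpath('price_tag', price_tag)
        yield l.load_item()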

I suppose I'll end up using pipelines or feed exports eventually. Beyond the code looking cleaner, though, I haven't really found a solid benefit described anywhere online.
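
For what it's worth, feed exports need no extra code at all: in Scrapy releases of that era they could be switched on with the documented FEED_FORMAT and FEED_URI settings, for example:

# settings.py
FEED_FORMAT = 'json'      # serialize scraped items as JSON
FEED_URI = 'output.json'  # write the feed to this file

which makes every scrapy crawl write its items to output.json without passing -o on the command line.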