Using XMLFeedSpider to parse HTML and XML

Date: 2016-11-03 14:11:53

Tags: html xml scrapy web-crawler

I have a web page from which I retrieve RSS links. The links point to XML, and I would like to use XMLFeedSpider to simplify the parsing.

Is this possible?

This would be the flow:

  • Fetch example.com/rss (returns HTML)
  • Parse the HTML and extract the RSS links
  • For each link, parse the XML

1 answer:

Answer 0: (score: 0)

I found a simple way to do it, based on the existing example in the documentation and a look at the source code. Here is my solution:

import scrapy  # needed for scrapy.Request below
from scrapy.spiders import XMLFeedSpider
from myproject.items import TestItem

class MySpider(XMLFeedSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com/feed.xml']
    iterator = 'iternodes'  # This is actually unnecessary, since it's the default value
    itertag = 'item'

    def start_requests(self):  # must be start_requests (plural), or Scrapy won't call it
        urls = ['http://www.example.com/get-feed-links']
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse_main)

    def parse_main(self, response):
        for el in response.css("li.feed-links"):
            yield scrapy.Request(el.css("a::attr(href)").extract_first(),
                                 callback=self.parse)

    def parse_node(self, response, node):
        self.logger.info('Hi, this is a <%s> node!: %s', self.itertag,
                         ''.join(node.extract()))

        item = TestItem()
        item['id'] = node.xpath('@id').extract()
        item['name'] = node.xpath('name').extract()
        item['description'] = node.xpath('description').extract()
        return item
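The `parse_node` callback above assumes each `<item>` node carries an `id` attribute plus `name` and `description` children. As a rough illustration of the XML shape it expects (a made-up feed snippet, iterated here with the standard library's `xml.etree.ElementTree` instead of Scrapy's selectors):

```python
import xml.etree.ElementTree as ET

# Hypothetical feed matching itertag = 'item'; not a real example.com feed.
FEED = """\
<rss>
  <channel>
    <item id="1">
      <name>First article</name>
      <description>Short summary of the first article.</description>
    </item>
    <item id="2">
      <name>Second article</name>
      <description>Short summary of the second article.</description>
    </item>
  </channel>
</rss>
"""

root = ET.fromstring(FEED)
# Roughly what XMLFeedSpider iterates over when itertag = 'item':
for node in root.iter("item"):
    print(node.get("id"), node.findtext("name"), node.findtext("description"))
```

With `iterator = 'iternodes'`, XMLFeedSpider walks every `<item>` node in the response and calls `parse_node` once per node, so the XPath expressions `@id`, `name`, and `description` are evaluated relative to each node, as mimicked above.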