Is there any way to get the text inside an anchor tag in Scrapy's CrawlSpider?

Time: 2019-04-01 08:02:44

Tags: python scrapy

I have a CrawlSpider that crawls a given site for a particular department and downloads the PDFs on that site. Everything works fine, but along with the PDF link I also need the text inside the anchor tag.

For example:

<a href='../some/pdf/url/pdfname.pdf'>Project Report</a>

Consider this anchor tag: in the callback I get the response object, and along with that object I need the text inside the tag, e.g. "Project Report". Is there any way to get this information together with the response object? I have already gone through the https://docs.scrapy.org/en/latest/topics/selectors.html link, but that is not what I am looking for.

Sample code:

import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class DocumunetPipeline(scrapy.Item):
    document_url = scrapy.Field()
    name = scrapy.Field()  # name of pdf/doc file
    depth = scrapy.Field()

class MySpider(CrawlSpider):
    name = 'pdf'
    start_urls = ['http://www.someurl.com']
    allowed_domains = ['someurl.com']
    rules = (
        Rule(LinkExtractor(tags="a", deny_extensions=[]),
             callback='parse_document', follow=True),
    )


    def parse_document(self, response):
        # Default to b'' so decode() does not fail when the header is missing.
        content_type = (response.headers
                        .get('Content-Type', b'')
                        .decode("utf-8"))
        url = response.url
        if content_type == "application/pdf":
            name = response.headers.get('Content-Disposition', None)
            document = DocumunetPipeline()
            document['document_url'] = url
            document['name'] = name
            document['depth'] = response.meta.get('depth', None)
            yield document

2 Answers:

Answer 0 (score: 2)

It does not seem to be documented, but the meta attribute does contain the link text; it is set in this line. A minimal example:

from scrapy.spiders import Rule, CrawlSpider
from scrapy.linkextractors import LinkExtractor


class LinkTextSpider(CrawlSpider):
    name = 'linktext'
    start_urls = ['https://example.org']
    rules = [
        Rule(LinkExtractor(), callback='parse_document'),
    ]

    def parse_document(self, response):
        return dict(
            url=response.url,
            link_text=response.meta['link_text'],
        )

It produces output similar to:

2019-04-01 12:03:30 [scrapy.core.engine] INFO: Spider opened
2019-04-01 12:03:30 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-04-01 12:03:30 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2019-04-01 12:03:31 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://example.org> (referer: None)
2019-04-01 12:03:32 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://www.iana.org/domains/reserved> from <GET http://www.iana.org/domains/example>
2019-04-01 12:03:33 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.iana.org/domains/reserved> (referer: None)
2019-04-01 12:03:33 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.iana.org/domains/reserved>
{'url': 'https://www.iana.org/domains/reserved', 'link_text': 'More information...'}
2019-04-01 12:03:33 [scrapy.core.engine] INFO: Closing spider (finished)
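Applied to the question's spider, that same meta entry can fill in the item's name field directly. A minimal sketch of parse_document under that assumption, reusing the question's Rule and item definitions:

def parse_document(self, response):
    content_type = (response.headers
                    .get('Content-Type', b'')
                    .decode("utf-8"))
    if content_type == "application/pdf":
        document = DocumunetPipeline()
        document['document_url'] = response.url
        # CrawlSpider stores the anchor text of the followed link here.
        document['name'] = response.meta.get('link_text', '').strip()
        document['depth'] = response.meta.get('depth', None)
        yield document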

Answer 1 (score: 0)

I believe the best way to achieve this is to not use crawl rules, and instead do regular crawling with your own parse_* methods handling all of the responses.

Then, when you yield a request with parse_document as its callback, you can include the link text in the meta parameter of the request and read it back from response.meta in the parse_document method.
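A minimal sketch of this approach, assuming a plain scrapy.Spider with the question's placeholder start URL; the anchor text is carried through the request's meta by hand:

import scrapy


class PdfLinkSpider(scrapy.Spider):
    name = 'pdf_manual'
    start_urls = ['http://www.someurl.com']  # placeholder from the question

    def parse(self, response):
        # Follow every anchor, carrying its text along in the request meta.
        for link in response.css('a'):
            href = link.attrib.get('href')
            if not href:
                continue
            text = link.css('::text').get(default='').strip()
            yield response.follow(href, callback=self.parse_document,
                                  meta={'link_text': text})

    def parse_document(self, response):
        yield {
            'url': response.url,
            'link_text': response.meta.get('link_text'),
        }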