Scrapy (Python): recursively find href references

Posted: 2017-01-26 10:00:02

Tags: python scrapy href

I am trying out Scrapy to find and print every href starting from the start page:

import scrapy

class Ejercicio2(scrapy.Spider):
    name = "Ejercicio2"
    Ejercicio2 = {}
    category = None
    lista_urls = []  # list to collect the urls

    def __init__(self, *args, **kwargs):
        super(Ejercicio2, self).__init__(*args, **kwargs)
        self.start_urls = ['http://www.masterdatascience.es/']
        self.allowed_domains = ['www.masterdatascience.es/']
        url = ['http://www.masterdatascience.es/']

    def parse(self, response):
        print(response)
        # hay_enlace = response.css('a::attr(href)')
        # if hay_enlace:
        links = response.xpath("a/@href")
        for el in links:
            url = response.css('a::attr(href)').extract()
            print(url)
            next_url = response.urljoin(el.xpath("a/@href").extract_first())
            print(next_url)
            print('pasa por aqui')
            yield scrapy.Request(url, self.parse())
            # yield scrapy.Request(next_url, callback=self.parse)
            print(next_url)

But it does not work as expected: it does not follow the href references it finds, only the first one.

2 Answers:

Answer 0 (score: 0):

The following code will print out all of the hrefs on the page:

import scrapy

class stackoverflow20170129Spider(scrapy.Spider):
    name = "stackoverflow20170129"
    allowed_domains = ["masterdatascience.es"]
    start_urls = ["http://www.masterdatascience.es/",]

    def parse(self, response):
        # Select every href on the page, resolve it against the response URL and print it.
        for href in response.xpath('//a/@href'):
            url = response.urljoin(href.extract())
            print(url)
#           yield scrapy.Request(url, callback=self.parse_dir_contents)

One more thing: it is worth dropping the "www." from allowed_domains. If you crawl deeper into the site and start visiting pages such as anewpage.masterdatascience.es, keeping the "www." will cause those pages to be blocked.
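A quick sketch of the difference (assuming Scrapy's offsite filtering accepts a request whose host equals, or is a subdomain of, an entry in allowed_domains):

allowed_domains = ["masterdatascience.es"]
# www.masterdatascience.es      -> allowed (subdomain of masterdatascience.es)
# anewpage.masterdatascience.es -> allowed (subdomain of masterdatascience.es)

allowed_domains = ["www.masterdatascience.es"]
# www.masterdatascience.es      -> allowed (exact match)
# anewpage.masterdatascience.es -> filtered out (not under www.masterdatascience.es)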

Answer 1 (score: -2):

You can try changing the XPath expression to //a/@href.
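
A minimal sketch of how that XPath could be combined with a recursive callback (the spider name and layout below are assumptions rather than code from the question; the key points are the //a/@href selector and passing self.parse as the callback instead of calling it):

import scrapy

class HrefRecursiveSpider(scrapy.Spider):
    # Hypothetical spider illustrating the fix; the name and class are made up.
    name = "href_recursive"
    allowed_domains = ["masterdatascience.es"]
    start_urls = ["http://www.masterdatascience.es/"]

    def parse(self, response):
        for href in response.xpath("//a/@href"):
            url = response.urljoin(href.extract())
            print(url)
            # Pass the method itself as the callback (do not call it), so Scrapy
            # schedules each discovered page and runs parse() on it in turn.
            yield scrapy.Request(url, callback=self.parse)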