Unable to scrape all the links on a website with Scrapy

Date: 2018-08-19 11:40:45

Tags: python xpath web-scraping scrapy

I am trying to scrape all the links from a site whose listings are paginated. My Scrapy code is given below, but it is not working as expected: it only scrapes the links from the first page. How do I scrape all of the links? Thanks.

# -*- coding: utf-8 -*-
import scrapy


class DummySpider(scrapy.Spider):
    name = 'dummyspider'
    allowed_domains = ['alibaba.com']
    start_urls = ['https://www.alibaba.com/countrysearch/CN/China/products/A.html']

    def parse(self, response):
        # Extract every listing link on the current page.
        links = response.xpath('//*[@class="column one3"]/a/@href').extract()

        for link in links:
            yield {'link': link}

        # Follow the pagination link, if one is present.
        next_page_url = response.xpath('//*[@class="page_btn"]/@href').extract_first()
        if next_page_url:
            next_page_url = response.urljoin(next_page_url)
            yield scrapy.Request(url=next_page_url, callback=self.parse)

The start URL is https://www.alibaba.com/countrysearch/CN/China/products/A.html
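
(As a quick sanity check, you can test whether the pagination XPath matches anything in the raw HTML using Scrapy's interactive shell; the selector below is simply the one from the code above, not a verified selector for this page:)

$ scrapy shell 'https://www.alibaba.com/countrysearch/CN/China/products/A.html'
>>> response.xpath('//*[@class="page_btn"]/@href').extract_first()

If that returns None, the next-page link is not present in the static HTML (for example because it is rendered by JavaScript), which would explain why only the first page gets crawled.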

1 Answer:

Answer 0 (score: 2)

You can solve this by setting your start URLs correctly.

The string module has a constant containing the letters of the alphabet:

>>> import string
>>> string.ascii_uppercase
'ABCDEFGHIJKLMNOPQRSTUVWXYZ'

which you can use to create the URLs programmatically:

import string
from scrapy import Spider


class MySpider(Spider):
    name = 'alibaba'
    # http://foo.com is a placeholder; substitute the real URL pattern,
    # generating one start URL per letter of the alphabet.
    start_urls = [
        f'http://foo.com?letter={char}'
        for char in string.ascii_uppercase
    ]
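
Putting that together with the link extraction from the question, a minimal sketch might look like the following. The spider name is arbitrary, the URL pattern is inferred from the question's start URL (/products/A.html, so one page per letter), and the XPath is the question's own selector, unverified here:

import string

import scrapy


class AlphabetSpider(scrapy.Spider):
    name = 'alibaba_letters'
    allowed_domains = ['alibaba.com']
    # One start URL per letter, following the /products/<LETTER>.html
    # pattern inferred from the question's start URL.
    start_urls = [
        f'https://www.alibaba.com/countrysearch/CN/China/products/{char}.html'
        for char in string.ascii_uppercase
    ]

    def parse(self, response):
        # Same link extraction as in the question's code.
        for link in response.xpath('//*[@class="column one3"]/a/@href').extract():
            yield {'link': link}

Since every letter's page is enqueued up front, the spider no longer depends on finding a working next-page link on each page.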