How do I crawl an entire domain instead of supplying individual links?

Asked: 2019-04-10 19:31:21

Tags: python scrapy

Currently our spider works through a hard-coded list of URLs. We would like to change it so that it only needs the main domain.

How can we change the code below so that it only expects the domain,

https://www.example.com/shop/

A pointer to a good example source would also be appreciated.

    def start_requests(self):
        urls = [
#           'https://www.example.com/shop/outdoors-unknown-hart-creek-fleece-hoodie',
            'https://www.example.com/shop/adidas-unknown-essentials-cotton-fleece-3s-over-head-hoodie#repChildCatSku=111767466',
            'https://www.example.com/shop/unknown-metallic-long-sleeve-shirt#repChildCatSku=115673740',
            'https://www.example.com/shop/unknown-fleece-full-zip-hoodie#repChildCatSku=111121673',
            'https://www.example.com/shop/unknown-therma-fleece-training-hoodie#repChildCatSku=114784077',
            'https://www.example.com/shop/under-unknown-rival-fleece-crew-sweater#repChildCatSku=114636980',
            'https://www.example.com/shop/unknown-element-1-2-zip-top#repChildCatSku=114794996',
            'https://www.example.com/shop/unknown-element-1-2-zip-top#repChildCatSku=114794996',
            'https://www.example.com/shop/under-unknown-rival-fleece-full-zip-hoodie#repChildCatSku=115448841',
            'https://www.example.com/shop/under-unknown-rival-fleece-crew-sweater#repChildCatSku=114636980',
            'https://www.example.com/shop/adidas-unknown-essentials-3-stripe-fleece-sweatshirt#repChildCatSku=115001812',
            'https://www.example.com/shop/under-unknown-fleece-logo-hoodie#repChildCatSku=115305875',
            'https://www.example.com/shop/under-unknown-heatgear-long-sleeve-shirt#repChildCatSku=107534192',
            'https://www.example.com/shop/unknown-long-sleeve-legend-hoodie#repChildCatSku=112187421',
            'https://www.example.com/shop/unknown-element-1-2-zip-top#repChildCatSku=114794996',
            'https://www.example.com/shop/unknown-sportswear-funnel-neck-hoodie-111112208#repChildCatSku=111112208',
            'https://www.example.com/shop/unknown-therma-swoosh-fleece-training-hoodie#repChildCatSku=114784481',
        ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        page = response.url.split("/")[-1]
        filename = 'academy-%s.txt' % page
        # Extract every price and SKU on the page.
        res2 = response.xpath("//span[@itemprop='price']/text()|//span[@itemprop='sku']/text()").extract()
        res = '\n'.join(res2)
        with open(filename, 'w') as f:
            f.write(res)
            self.log('Saved file %s' % filename)

1 Answer:

Answer 0 (score: 1)

For a pure traversal, this is enough:

    class MySpider(scrapy.Spider):
        name = 'my'
        allowed_domains = ['example.com']
        start_urls = ['https://www.example.com/shop/']

        def parse(self, response):
            # Follow every link found on the page; requests leaving
            # allowed_domains are filtered out by Scrapy automatically.
            for link in response.css('a'):
                yield response.follow(link)

But this task seems pointless as stated. Could you elaborate on what you are actually trying to achieve?
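What keeps the spider above from wandering off example.com is `allowed_domains`, which Scrapy's offsite filtering enforces for every followed request. A minimal standard-library sketch of that same check (`stays_on_domain` is an illustrative helper of ours, not a Scrapy API):

```python
from urllib.parse import urljoin, urlparse

def stays_on_domain(base_url, href, allowed_domains):
    """Resolve href against the page URL and check whether the resulting
    host is one of the allowed domains (or a subdomain of one)."""
    host = urlparse(urljoin(base_url, href)).netloc
    return any(host == d or host.endswith('.' + d) for d in allowed_domains)

base = 'https://www.example.com/shop/'
print(stays_on_domain(base, '/shop/some-hoodie', ['example.com']))    # True
print(stays_on_domain(base, 'https://other.com/x', ['example.com']))  # False
```

With a check like this in place, the hard-coded URL list becomes unnecessary: every `/shop/...` product link discovered on a page is followed, and everything else is dropped.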