How to scrape by looping through multiple URLs from a CSV file in Scrapy?

Date: 2018-08-19 04:08:33

Tags: python csv web-scraping scrapy

My code for scraping data from the Alibaba website:

import scrapy


class IndiamartSpider(scrapy.Spider):
    name = 'alibot'
    allowed_domains = ['alibaba.com']
    start_urls = ['https://www.alibaba.com/showroom/acrylic-wine-box_4.html']

    def parse(self, response):
        Title = response.xpath('//*[@class="title three-line"]/a/@title').extract()
        Price = response.xpath('//div[@class="price"]/b/text()').extract()
        Min_order = response.xpath('//div[@class="min-order"]/b/text()').extract()
        Response_rate = response.xpath('//i[@class="ui2-icon ui2-icon-skip"]/text()').extract()

        for item in zip(Title, Price, Min_order, Response_rate):
            scraped_info = {
                'Title': item[0],
                'Price': item[1],
                'Min_order': item[2],
                'Response_rate': item[3]
            }
            yield scraped_info

Note the start URL: the spider only scrapes from that single given URL, but I want this code to scrape all the URLs present in my CSV file, and that file contains a large number of URLs. A sample of the data.csv file:

'https://www.alibaba.com/showroom/shock-absorber.html',
'https://www.alibaba.com/showroom/shock-wheel.html',
'https://www.alibaba.com/showroom/shoes-fastener.html',
'https://www.alibaba.com/showroom/shoes-women.html',
'https://www.alibaba.com/showroom/shoes.html',
'https://www.alibaba.com/showroom/shoulder-long-strip-bag.html',
'https://www.alibaba.com/showroom/shower-hair-band.html',
...........

How do I import all the links from the CSV file into my code at once?

3 Answers:

Answer 0 (score: 2)

To iterate over the file correctly without loading all of it into memory, you should use a generator; both file objects and the start_requests method in Python/Scrapy are generators:

from scrapy import Request, Spider


class MySpider(Spider):
    name = 'csv'

    def start_requests(self):
        # Stream the file line by line instead of reading it whole;
        # this assumes one bare URL per line (strip any surrounding
        # quotes and commas if your file looks like the sample above).
        with open('file.csv') as f:
            for line in f:
                if not line.strip():
                    continue
                yield Request(line.strip())

Further explanation: the Scrapy engine uses start_requests to generate requests, and it keeps pulling new requests from it until the concurrent-request limit is full (settings such as CONCURRENT_REQUESTS).
Also worth mentioning: by default Scrapy crawls depth-first, so new requests take priority, which means the start_requests loop will be the last thing to finish.
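
As an illustration of the settings named above (a sketch based on the Scrapy FAQ, not part of the original answer): CONCURRENT_REQUESTS caps the in-flight requests, and switching the scheduler queues to FIFO turns the default depth-first crawl into breadth-first:

# settings.py -- illustrative values, not from the original answer

# Cap on concurrently in-flight requests (16 is Scrapy's default).
CONCURRENT_REQUESTS = 16

# Crawl breadth-first instead of the default depth-first (per the
# Scrapy FAQ): positive depth priority plus FIFO scheduler queues.
DEPTH_PRIORITY = 1
SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue'
SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue'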

Answer 1 (score: 1)

You're almost there. The only change is in start_urls, which you want to be "all the URLs in the *.csv file". The following code implements this change.
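
A minimal sketch of that change, assuming data.csv holds one quoted URL per line as in the sample above (the parse method stays exactly as in the question):

import scrapy


class IndiamartSpider(scrapy.Spider):
    name = 'alibot'
    allowed_domains = ['alibaba.com']

    # Build start_urls from data.csv at class-definition time, stripping
    # the surrounding quotes and trailing commas seen in the sample file.
    with open('data.csv') as f:
        start_urls = [line.strip().strip(',').strip("'")
                      for line in f if line.strip()]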

Answer 2 (score: 0)

Let's assume you already have the list of URLs stored as a dataframe and you want to loop over every URL present in it. My approach, which worked for me, is given below.

import pandas as pd
import scrapy


class IndiamartSpider(scrapy.Spider):
    name = 'alibot'
    #allowed_domains = ['alibaba.com']
    #start_urls = ['https://www.alibaba.com/showroom/acrylic-wine-box_4.html']

    def start_requests(self):
        # fileContainingUrls.csv is a csv file with a column named 'URLS'
        # that contains all the urls you want to loop over.
        df = pd.read_csv('fileContainingUrls.csv')
        urlList = df['URLS'].to_list()
        for i in urlList:
            yield scrapy.Request(url=i, callback=self.parse)

    def parse(self, response):
        Title = response.xpath('//*[@class="title three-line"]/a/@title').extract()
        Price = response.xpath('//div[@class="price"]/b/text()').extract()
        Min_order = response.xpath('//div[@class="min-order"]/b/text()').extract()
        Response_rate = response.xpath('//i[@class="ui2-icon ui2-icon-skip"]/text()').extract()

        for item in zip(Title, Price, Min_order, Response_rate):
            scraped_info = {
                'Title': item[0],
                'Price': item[1],
                'Min_order': item[2],
                'Response_rate': item[3]
            }
            yield scraped_info
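
One caveat with zip in both spiders: it truncates to the shortest of the extracted lists, so a listing missing one field (for example the response rate) silently drops the trailing items. A hedged alternative, if you would rather keep partial rows, is itertools.zip_longest:

from itertools import zip_longest

# zip_longest pads missing fields with None instead of truncating,
# so a listing with an absent value still yields an item.
for item in zip_longest(Title, Price, Min_order, Response_rate, fillvalue=None):
    yield {
        'Title': item[0],
        'Price': item[1],
        'Min_order': item[2],
        'Response_rate': item[3],
    }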