How can I save each start URL's pages to a separate file in Scrapy (Python)?

Asked: 2018-06-18 09:33:42

Tags: python python-3.x web-scraping scrapy web-crawler

I have the following code:

import scrapy
from scrapy.selector import Selector


class VoteSpider(scrapy.Spider):
    name = "test"

    def start_requests(self):

        self.start_url = [
            "http://www.domain.de/URI.html?get=1&getX=2",
            "http://www.domain.de/URI.html?get=2&getX=3",
            "http://www.domain.de/URI.html?get=3&getX=4",
            "http://www.domain.de/URI.html?get=4&getX=5"
        ]

        for url in self.start_url:
            self.a = 0
            self.url = url
            self.page = self.url.split("/")[-1]
            self.filename = '%s.csv' % self.page
            with open(self.filename, 'w') as f:
                f.write('URL:;' + self.url + '\n')

            yield scrapy.Request(url=self.url, callback=self.parse, dont_filter=True)

    def parse(self, response):
        sel = Selector(response)

        votes = sel.xpath('//div[contains(@class,"ratings")]/ul')

        with open(self.filename, 'a') as f:
            for vote in votes:
                self.a += 1
                f.write(str(self.a) + ';' + vote.xpath('./li/text()').extract_first('') + '\n')

        if len(votes.xpath('//a[contains(@class,"next")]/@href').extract()) != 0:
            next_page = votes.xpath('//a[contains(@class,"next")]/@href').extract()[0]
            if next_page is not None:
                yield response.follow(next_page, callback=self.parse, dont_filter=True)

My problem is that with this code everything gets saved into a single file, which in the example above would be:

URI.html?get=1&getX=2.csv

Since I loop over several URLs and create a new filename for each of them, I would like to know what is going wrong.

Why doesn't this code create a new file for each URL?

for url in self.start_url:
    self.a = 0
    self.url = url
    self.page = self.url.split("/")[-1]
    self.filename = '%s.csv' % self.page
    with open(self.filename, 'w') as f:
        f.write('URL:;' + self.url + '\n')

Can someone show me the right way to save one file per start URL? Please keep in mind that I also want the follow-up pages appended to that file, until there are no more pages to follow.

EDIT:

The problem is not that the files aren't being created.

All of the output written by

with open(self.filename, 'a') as f:
    for vote in votes:
        self.a += 1
        f.write(str(self.a) + ';' + vote.xpath('./li/text()').extract_first('') + '\n')

is saved into one file instead of four. Everything ends up in the file of the first available start URL.

EDIT2:

The idea is good! But in my case the following does not work:

file_name = '%s.csv' % response.url.split("/")[-1]

because the URI changes from page to page, and a new file is created for every new URI.

startURL 1     - "http://www.domain.de/URI.html?get=1&getX=2"
response.url 2 - "http://www.domain.de/URI.html?get=2&getX=2"
response.url 3 - "http://www.domain.de/URI.html?get=3&getX=2"

I want everything to be saved under the start URL only:

startURL 1     saved to "http://www.domain.de/URI.html?get=1&getX=2.csv"
response.url 2 saved to "http://www.domain.de/URI.html?get=1&getX=2.csv"
response.url 3 saved to "http://www.domain.de/URI.html?get=1&getX=2.csv"

An unreliable workaround would be to map the names by condition, but that is impractical if the number of start URLs grows or their structure changes:

# Note: str.find() returns -1 (which is truthy) when the substring is absent,
# so find() used directly as a condition is always true; "in" is the right test.
if "getX=2" in response.url:
    filename = self.start_url[0].split('/')[-1]
if "getX=3" in response.url:
    filename = self.start_url[1].split('/')[-1]
if "getX=4" in response.url:
    filename = self.start_url[2].split('/')[-1]
...
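
A slightly more general form of this workaround would derive the mapping from the start URL list instead of hardcoding each case. This is a sketch only: it assumes, as in the example above, that getX stays constant across a start URL's follow-up pages and is unique per start URL, and the helper name filename_for is made up for illustration:

from urllib.parse import parse_qs, urlparse

def filename_for(response_url, start_urls):
    # Map any response URL back to its start URL's file via the getX parameter.
    by_getx = {
        parse_qs(urlparse(u).query)["getX"][0]: u.split("/")[-1] + ".csv"
        for u in start_urls
    }
    return by_getx[parse_qs(urlparse(response_url).query)["getX"][0]]

# filename_for("http://www.domain.de/URI.html?get=3&getX=2", self.start_url)
# -> "URI.html?get=1&getX=2.csv"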

I don't understand why self.filename is not passed correctly to self.parse(). Is there some multiprocessing involved, so that self.filename is always overwritten by the first item? How can I forward the correct filename without using the response object?
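
(The likely cause is not multiprocessing but Scrapy's asynchronous scheduling: with default concurrency settings, the loop in start_requests() is consumed before the first response arrives, so by the time parse() runs the shared instance attribute no longer matches the response being handled. A minimal sketch, with a hypothetical spider and example URLs:)

import scrapy

class AttrDemoSpider(scrapy.Spider):
    # Hypothetical spider illustrating the overwritten-attribute problem.
    name = "attr_demo"

    def start_requests(self):
        for url in ["http://www.domain.de/URI.html?get=1&getX=2",
                    "http://www.domain.de/URI.html?get=2&getX=3"]:
            # Overwritten on every loop pass; all requests share one spider instance.
            self.filename = url.split("/")[-1] + ".csv"
            yield scrapy.Request(url, callback=self.parse, dont_filter=True)

    def parse(self, response):
        # By the time a response arrives, self.filename usually holds the value
        # from the final loop pass, not the one matching this response.
        self.logger.info("url=%s filename=%s", response.url, self.filename)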

SOLUTION:

I pass the value through request.meta:

import scrapy
from scrapy.selector import Selector


class VoteSpider(scrapy.Spider):
    name = "test2"

    def start_requests(self):

        self.start_url = [
            "http://www.domain.de/URI.html?get=1&getX=2",
            "http://www.domain.de/URI.html?get=2&getX=3",
            "http://www.domain.de/URI.html?get=3&getX=4",
            "http://www.domain.de/URI.html?get=4&getX=5"
        ]

        for url in self.start_url:
            self.a = 0
            self.url = url
            self.page = self.url.split("/")[-1]
            self.filename = '%s.csv' % self.page
            with open(self.filename, 'w') as f:
                f.write('URL:;' + self.url + '\n')
            # Attach the start URL to the request so parse() can recover it
            # from response.meta, independent of any shared instance attribute.
            request = scrapy.Request(url=self.url, callback=self.parse, dont_filter=True)
            request.meta['url'] = url
            yield request

    def parse(self, response):
        sel = Selector(response)

        votes = sel.xpath('//div[contains(@class,"ratings")]/ul')

        self.file = response.meta['url']
        filename = self.file.split("/")[-1] + '.csv'
        with open(filename, 'a') as f:
            for vote in votes:
                # Note: self.a is still one shared counter across all start
                # URLs, so the numbering may interleave between files.
                self.a += 1
                f.write(str(self.a) + ';' + vote.xpath('./li/text()').extract_first('') + '\n')

        if len(votes.xpath('//a[contains(@class,"next")]/@href').extract()) != 0:
            next_page = votes.xpath('//a[contains(@class,"next")]/@href').extract()[0]
            if next_page is not None:
                # Forward the original start URL to the next page's callback.
                request = response.follow(next_page, callback=self.parse, dont_filter=True)
                request.meta['url'] = self.file
                yield request
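
As an aside, newer Scrapy releases (1.7+) provide cb_kwargs for exactly this purpose. A minimal sketch of the same pattern, where the two methods below would replace the ones in VoteSpider:

def start_requests(self):
    for url in self.start_url:
        # cb_kwargs entries are passed to the callback as keyword arguments.
        yield scrapy.Request(url, callback=self.parse, dont_filter=True,
                             cb_kwargs={'start_url': url})

def parse(self, response, start_url):
    filename = start_url.split("/")[-1] + '.csv'
    # ... append rows to `filename` as above ...
    next_page = response.xpath('//a[contains(@class,"next")]/@href').extract_first()
    if next_page:
        yield response.follow(next_page, callback=self.parse, dont_filter=True,
                              cb_kwargs={'start_url': start_url})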

1 Answer:

Answer 0 (score: 1)

Instead of:

with open(self.filename, 'a') as f:
    ...

try the following, where file_name is derived from the current request's URL, e.g. URI.html?get=1&getX=2.csv:

file_name = '%s.csv' % response.url.split("/")[-1]
with open(file_name, 'a') as f:
    ...