How can I overwrite the output file on every run when exporting from a Scrapy project?

Time: 2016-10-30 09:30:47

Tags: python csv scrapy scrapy-spider scrapy-pipeline

I am scraping a website and exporting the results as a list of urls. Example - scrapy crawl xyz_spider -o urls.csv

It is working perfectly, but I want each run to produce a fresh urls.csv instead of appending the data to the existing file. Is there any parameter I can pass to enable that?

3 Answers:

Answer 0 (score: 2):

Unfortunately, Scrapy cannot do this at the moment. There is a proposed enhancement for it on GitHub: https://github.com/scrapy/scrapy/issues/547

However, you can easily send the output to stdout and redirect that to a file:

scrapy crawl myspider -t json --nolog -o - > output.json

-o - means output to minus, and minus in this case means stdout. You can also create an alias that deletes the file before running Scrapy, for example:

alias sc='rm -f output.csv && scrapy crawl myspider -o output.csv'

Answer 1 (score: 2):

I usually handle custom file exports by running Scrapy as a Python script and opening the file before calling the spider class. This gives more flexibility for handling and formatting csv files, and even lets them run as an extension of a web application or in the cloud. Something along the lines of the following:

import csv

from scrapy.crawler import CrawlerProcess

if __name__ == '__main__':
    process = CrawlerProcess()

    # Opening in write mode means the output file is replaced on every run.
    with open('Output.csv', 'wb') as output_file:
        mywriter = csv.writer(output_file)
        # Spider_Class and start_urls come from your own project.
        process.crawl(Spider_Class, start_urls=start_urls)
        process.start()  # blocks here until the crawl is finished
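To make the snippet above complete, here is a rough sketch of how the pre-opened file can actually receive the scraped data, using Scrapy's item_scraped signal. The LinkSpider class, its 'url' field, and the start url are made up for illustration, and the sketch assumes Python 3 with a reasonably recent Scrapy, unlike the Python 2 snippets in this thread:

import csv

import scrapy
from scrapy import signals
from scrapy.crawler import CrawlerProcess


class LinkSpider(scrapy.Spider):
    # Hypothetical spider: yields one item per link found on each start page.
    name = 'link_spider'

    def parse(self, response):
        for href in response.css('a::attr(href)').extract():
            yield {'url': response.urljoin(href)}


if __name__ == '__main__':
    process = CrawlerProcess()

    # 'w' truncates Output.csv on every run, so rows never accumulate across runs.
    with open('Output.csv', 'w', newline='') as output_file:
        writer = csv.writer(output_file)
        writer.writerow(['url'])  # header row

        def write_item(item, response, spider):
            # item_scraped fires once per scraped item; append it as a csv row.
            writer.writerow([item['url']])

        crawler = process.create_crawler(LinkSpider)
        crawler.signals.connect(write_item, signal=signals.item_scraped)
        process.crawl(crawler, start_urls=['https://example.com'])
        process.start()  # blocks until the crawl has finished

Writing through a signal handler keeps the overwrite behaviour in one place (the open() call) instead of relying on the feed exporter's append semantics.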

Answer 2 (score: 0):

You can open the file in write mode and close it straight away, which deletes the file's contents.
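Stripped of the spider below, the trick amounts to just this (a minimal sketch; the file name is only an example):

# Opening in 'w' mode truncates the file; closing it immediately leaves it empty.
open('./restaurantsLink.csv', 'w').close()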

class RestaurantDetailSpider(scrapy.Spider):

    # Opening the file in 'w' mode and closing it right away empties it.
    file = open('./restaurantsLink.csv','w')
    file.close()
    urls = list(open('./restaurantsLink.csv'))
    urls = urls[1:]  # skip the csv header row
    print "Url List Found : " + str(len(urls))

    name = "RestaurantDetailSpider"
    start_urls = urls

    def safeStr(self, obj):
        # Convert any value to a str, dropping characters that cannot be encoded.
        try:
            if obj is None:
                return obj
            return str(obj)
        except UnicodeEncodeError as e:
            return obj.encode('utf8', 'ignore').decode('utf8')

    def parse(self, response):
        try :
            detail = RestaurantDetailItem()
            HEADING = self.safeStr(response.css('#HEADING::text').extract_first())
            if HEADING is not None:
                # Quote values that contain commas so the csv columns stay intact.
                if ',' in HEADING:
                    HEADING = "'" + HEADING + "'"
                detail['Name'] = HEADING

            CONTACT_INFO = self.safeStr(response.css('.directContactInfo *::text').extract_first())
            if CONTACT_INFO is not None:
                if ',' in CONTACT_INFO:
                    CONTACT_INFO = "'" + CONTACT_INFO + "'"
                detail['Phone'] = CONTACT_INFO

            ADDRESS_LIST = response.css('.headerBL .address *::text').extract()
            if ADDRESS_LIST is not None:
                ADDRESS = ', '.join([self.safeStr(x) for x in ADDRESS_LIST])
                ADDRESS = ADDRESS.replace(',','')
                detail['Address'] = ADDRESS

            EMAIL = self.safeStr(response.css('#RESTAURANT_DETAILS .detailsContent a::attr(href)').extract_first())
            if EMAIL is not None:
                EMAIL = EMAIL.replace('mailto:','')
                detail['Email'] = EMAIL

            TYPE_LIST = response.css('.rating_and_popularity .header_links *::text').extract()
            if TYPE_LIST is not None:
                TYPE = ', '.join([self.safeStr(x) for x in TYPE_LIST])
                TYPE = TYPE.replace(',','')
                detail['Type'] = TYPE

            yield detail
        except Exception as e:
            print "Error occurred"
            yield None

scrapy crawl RestaurantMainSpider -t csv -o restaurantsLink.csv

This command creates the restaurantsLink.csv file, which I then use in my next spider, RestaurantDetailSpider.

So you can run the following command - it deletes restaurantsLink.csv and creates a new one, which the spider above then uses, so the file is overwritten every time the spider runs:

rm restaurantsLink.csv && scrapy crawl RestaurantMainSpider -o restaurantsLink.csv -t csv
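Put together, the two-spider workflow from this answer could be wrapped in a small shell script (a sketch; restaurantDetails.csv is just a placeholder output name, and rm -f avoids an error on the first run when the file does not exist yet):

#!/bin/sh
# Remove the old link file (if any), regenerate it, then run the detail spider.
rm -f restaurantsLink.csv
scrapy crawl RestaurantMainSpider -t csv -o restaurantsLink.csv
scrapy crawl RestaurantDetailSpider -t csv -o restaurantDetails.csv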