Exporting dynamic items from Scrapy to CSV

Date: 2019-01-19 21:57:49

Tags: python csv scrapy export-to-csv scrapy-spider

I have a spider that scrapes some dynamic fields, like this:

from string import capwords

import scrapy
from scrapy import Field, Request
from scrapy.item import DictItem


class exhibitors_spider(scrapy.Spider):
    name = "exhibitors"

    url = "some url"

    def _create_item_class(self, class_name, field_list):
        # Build a Scrapy item class at runtime from a list of field names.
        field_dict = {}
        for field_name in field_list:
            field_dict[field_name] = Field()
        return type(str(class_name), (DictItem,), {'fields': field_dict})

    def start_requests(self):
        yield Request(url=self.url, callback=self.parse_page, dont_filter=True)

    def parse_page(self, response):
        Contact_Persons = {}
        Contact_Persons_blocks = response.selector.xpath("//h2[contains(text(),'Contact person')]/..//..//div/.//li")
        if Contact_Persons_blocks:
            # Note: xrange and .encode('utf-8') make this Python 2 code.
            for i in xrange(1, len(Contact_Persons_blocks) + 1):
                cp_name = Contact_Persons_blocks[i - 1].xpath(".//a[@itemprop='name']/bdi/text()").extract_first()
                if cp_name:
                    cp_name = capwords(cp_name.encode('utf-8'))
                else:
                    cp_name = 0
                Contact_Persons.update({"Contact_Person_Name_{}".format(i): cp_name})

                cp_title = Contact_Persons_blocks[i - 1].xpath(".//div[@itemprop='jobTitle']/text()").extract_first()
                if cp_title:
                    cp_title = capwords(cp_title.encode('utf-8'))
                else:
                    cp_title = 0
                Contact_Persons.update({"Contact_Person_Title_{}".format(i): cp_title})

                cp_link = Contact_Persons_blocks[i - 1].xpath(".//a[@class='ngn-mail-link']/@href").extract_first()
                if cp_link:
                    cp_link = self.domain + cp_link  # self.domain must be defined on the spider
                else:
                    cp_link = 0
                Contact_Persons.update({"Contact_Person_Link{}".format(i): cp_link})

        ExhibitorsItem = self._create_item_class('ExhibitorsItem', Contact_Persons.keys())

        item = ExhibitorsItem()  # this instantiation was missing in the original snippet
        for cp_key in Contact_Persons.keys():
            item[cp_key] = Contact_Persons[cp_key]
        yield item
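The `_create_item_class` helper relies on Python's three-argument `type()` built-in, which builds a new class from a name, a tuple of base classes, and a dict of attributes. A stdlib-only sketch of the same idea, with plain `dict` standing in for Scrapy's `DictItem`, shows what it produces:

```python
# Minimal sketch of dynamic class creation with type(), analogous to
# _create_item_class above but without the Scrapy dependency.
def create_item_class(class_name, field_list):
    # Each field name maps to a placeholder; Scrapy would use Field() here.
    field_dict = {name: None for name in field_list}
    return type(str(class_name), (dict,), {'fields': field_dict})

ItemCls = create_item_class('ExhibitorsItem', ['Contact_Person_Name_1'])
item = ItemCls()
item['Contact_Person_Name_1'] = 'Jane Doe'
```

Every call with a different `field_list` yields a distinct class, which is why items on different pages can carry different sets of fields.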

The problem is that I don't know in advance how many fields there will be, and each page yields a different number of them. When I export to CSV, Scrapy seems to create the file using the keys of the first item: it still writes all the values, but if, say, the first item has only one key, the CSV header will have only that one column, and the values of items with more keys end up in the wrong order. How can I make Scrapy build the CSV file from the item with the largest number of keys?
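One common workaround (not from the post; the function and variable names here are illustrative) is to buffer the scraped dicts and write the CSV yourself with `csv.DictWriter`, passing the union of all keys as `fieldnames` so every column appears in the header regardless of which item was scraped first. A minimal stdlib-only sketch:

```python
import csv
import io

def export_union_csv(items, fileobj):
    # Collect the union of all keys, preserving first-seen order, so the
    # header covers every field even if the first item has only a few.
    fieldnames = []
    for item in items:
        for key in item:
            if key not in fieldnames:
                fieldnames.append(key)
    # restval='' fills the columns an item is missing.
    writer = csv.DictWriter(fileobj, fieldnames=fieldnames, restval='')
    writer.writeheader()
    writer.writerows(items)

items = [
    {'Contact_Person_Name_1': 'Alice'},
    {'Contact_Person_Name_1': 'Bob', 'Contact_Person_Title_1': 'CEO'},
]
buf = io.StringIO()
export_union_csv(items, buf)
```

In a Scrapy project the same idea could live in a pipeline or a custom item exporter that buffers items in memory and writes them all on `close_spider` / `finish_exporting`; the trade-off is that nothing is written until the crawl ends.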

0 Answers:

There are no answers yet.