I'm having some problems with a web crawler I wrote. I want to save the data it fetches. If I understood the simple tutorial correctly, I just need to yield the data and then start the crawler with scrapy crawl <crawler> -o file.csv -t csv, right? For some reason the file stays empty. Here is my code:
# -*- coding: utf-8 -*-
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class PaginebiancheSpider(CrawlSpider):
    name = 'paginebianche'
    allowed_domains = ['paginebianche.it']
    start_urls = ['https://www.paginebianche.it/aziende-clienti/lombardia/milano/comuni.htm']

    rules = (
        Rule(LinkExtractor(allow=(), restrict_css=('.seo-list-name', '.seo-list-name-up')),
             callback="parse_item",
             follow=True),
    )

    def parse_item(self, response):
        if (response.xpath("//h2[@class='rgs']//strong//text()") != [] and
                response.xpath("//span[@class='value'][@itemprop='telephone']//text()") != []):
            yield ' '.join(response.xpath("//h2[@class='rgs']//strong//text()").extract()) + " " + response.xpath("//span[@class='value'][@itemprop='telephone']//text()").extract()[0].strip(),
I'm using Python 2.7.
Answer 0 (score: 1)
If you look at your spider's output, you'll see a bunch of error messages like this:
2018-10-20 13:47:52 [scrapy.core.scraper] ERROR: Spider must return Request, BaseItem, dict or None, got 'tuple' in <GET https://www.paginebianche.it/lombardia/abbiategrasso/vivai-padovani.html>
This means you're not yielding the right kind of result: you need a dict or an Item instead of the single-item tuple you're creating. The trailing comma at the end of your yield statement turns the string into a one-element tuple.
Something as simple as this should work:
yield {
    'name': response.xpath("normalize-space(//h2[@class='rgs'])").get(),
    'phone': response.xpath("//span[@itemprop='telephone']/text()").get()
}
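(In newer Scrapy/parsel versions, .get() is an alias for extract_first(), so use whichever your version supports.) Putting the fix into your spider, a minimal sketch of the corrected parse_item could look like this; the field names name and phone are just illustrative, and the selectors are the ones from the question:

    def parse_item(self, response):
        # Same selectors as in the question; .get() returns None
        # (instead of raising) when nothing matches.
        name = response.xpath("normalize-space(//h2[@class='rgs'])").get()
        phone = response.xpath("//span[@itemprop='telephone']/text()").get()
        # Only yield an item when both fields were found on the page.
        if name and phone:
            yield {'name': name, 'phone': phone.strip()}

Since the dict keys become the CSV column headers, your original command (scrapy crawl paginebianche -o file.csv -t csv) should then produce a file with name and phone columns.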