Scrapy isn't saving anything to the CSV file. What's wrong?

Time: 2014-02-01 08:44:03

Tags: scrapy

I tried to run this code and save the output to a CSV file, but the CSV ends up empty. Is there something wrong with the code? Please help. Thanks in advance.

from scrapy.spider import Spider
from scrapy.selector import Selector
from amazon.items import AmazonItem

class AmazonSpider(Spider):
    name = "amazon"
    allowed_domains = ["amazon.com"]
    start_urls = [
        "http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=9780316324106",
        "http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=9780307959478",
        "http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=9780345549334"
    ]

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//div[@class="fstRow prod celwidget"]')
        items = []
        for site in sites:
            item = AmazonItem()
            item['url'] = response.url
            item['price'] = site.xpath('//ul[@class="rsltL"]/li[5]/a/span/text()')
            if item['price']:
                item['price'] = item['price'].extract()[0]
            else:
                item['price'] = "NA"
                items.append(item)
        return items

If the price can't be found, I still want to save the item, substituting the string "NA".

When I try the code below, it works fine:

from scrapy.spider import Spider
from scrapy.selector import Selector
from amazon.items import AmazonItem

class AmazonSpider(Spider):
    name = "amazon"
    allowed_domains = ["amazon.com"]
    start_urls = [
        "http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=9780316324106",
        "http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=9780307959478",
        "http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=9780345549334"
    ]

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//div[@class="fstRow prod celwidget"]')
        items = []
        for site in sites:
            item = AmazonItem()
            item['url'] = response.url
            item['price'] = site.xpath('//ul[@class="rsltL"]/li[5]/a/span/text()')
            items.append(item)
        return items

What is wrong with this part? Or am I forgetting something?

if item['price']:
    item['price'] = item['price'].extract()[0]
else:
    item['price'] = "NA"

I'm a beginner. Can you help me? Thank you very much.

2 Answers:

Answer 0 (score: 1)

In your first code example, it looks like items.append(item) is indented too far.

That makes it part of the else block of the price check, so an item is only added to the items list when no price was found.

from scrapy.spider import Spider
from scrapy.selector import Selector
from amazon.items import AmazonItem

class AmazonSpider(Spider):
    name = "amazon"
    allowed_domains = ["amazon.com"]
    start_urls = [

    "http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=9780316324106",
    "http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=9780307959478",
    "http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=9780345549334"

    ]

    def parse(self, response):
        sel = Selector(response)
        sites = sel.xpath('//div[@class="fstRow prod celwidget"]')
        items = []
        for site in sites:
            item = AmazonItem()
            item['url'] = response.url
            item['price'] = site.xpath('//ul[@class="rsltL"]/li[5]/a/span/text()')
            if item['price']:
                item['price'] = item['price'].extract()[0]
            else:
                item['price'] = "NA"
            items.append(item)
        return items
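The effect of the indentation can be illustrated with a plain-Python analogue, using hypothetical price data in place of Scrapy selectors (None stands in for an empty selector result):

```python
# Hypothetical data: two rows with a price, one without.
raw_prices = ["$9.99", None, "$4.50"]

def collect_wrong(prices):
    """Mirrors the buggy spider: append is inside the else branch."""
    items = []
    for p in prices:
        if p:
            price = p
        else:
            price = "NA"
            items.append({"price": price})  # only runs when the price is missing
    return items

def collect_right(prices):
    """Mirrors the corrected spider: append runs for every row."""
    items = []
    for p in prices:
        price = p if p else "NA"
        items.append({"price": price})  # runs on every iteration
    return items

print(len(collect_wrong(raw_prices)))  # 1 -- only the missing-price row survives
print(len(collect_right(raw_prices)))  # 3 -- every row is kept
```

With the buggy indentation, all of the rows that actually have a price are silently dropped, which is why the CSV looked empty.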

Answer 1 (score: 0)

You don't need the items list; just use a yield statement:

def parse(self, response):
    sel = Selector(response)

    for site in sel.xpath('//div[@class="fstRow prod celwidget"]'):
        item = AmazonItem()
        item['url'] = response.url
        price = site.xpath('//ul[@class="rsltL"]/li[5]/a/span/text()')
        if price:
            item['price'] = price.extract()[0]
        else:
            item['price'] = "NA"

        yield item

To save to a data.csv file:

scrapy crawl amazon -o data.csv -t csv