Scrapy - Non-ASCII character in file, but no encoding declared

Date: 2014-03-04 23:04:50

Tags: python encoding scrapy

I am trying to scrape some basic data from this website, both as an exercise to learn more about Scrapy and as a proof of concept for a university project: http://steamdb.info/sales/

When I use the scrapy shell, I can get the information I want with the following XPath:

sel.xpath('//tbody/tr[1]/td[2]/a/text()').extract()

which should return the title of the game in the first row of the table, whose structure is:

<tbody>
     <tr>
          <td></td>
          <td><a>stuff I want here</a></td>
...

And it does, in the shell.
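For context, the shell session looks roughly like this (the URL is the one used in the spider below; `sel` is the selector shortcut the Scrapy 0.20 shell provides):

scrapy shell 'http://steamdb.info/sales/?displayOnly=all&category=0&cc=uk'
>>> sel.xpath('//tbody/tr[1]/td[2]/a/text()').extract()
>>> # returns a list containing the first game's title as a unicode string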

However, when I try to put it into a spider (steam.py):

1 from scrapy.spider import BaseSpider
2 from scrapy.selector import HtmlXPathSelector
3 from steam_crawler.items import SteamItem
4 from scrapy.selector import Selector
5 
6 class SteamSpider(BaseSpider):
7     name = "steam"
8     allowed_domains = ["http://steamdb.info/"]
9     start_urls = ['http://steamdb.info/sales/?displayOnly=all&category=0&cc=uk']
10     def parse(self, response):
11         sel = Selector(response)
12         sites = sel.xpath("//tbody")
13         items = []
14         count = 1
15         for site in sites:
16             item = SteamItem()
17             item ['title'] = sel.xpath('//tr['+ str(count) +']/td[2]/a/text()').extract().encode('utf-8')
18             item ['price'] = sel.xpath('//tr['+ str(count) +']/td[@class=“price-final”]/text()').extract().encode('utf-8')
19             items.append(item)
20             count = count + 1
21         return items

I get the following error:

    ricks-mbp:steam_crawler someuser$ scrapy crawl steam -o items.csv -t csv
Traceback (most recent call last):
  File "/usr/local/bin/scrapy", line 5, in <module>
    pkg_resources.run_script('Scrapy==0.20.0', 'scrapy')
  File "build/bdist.macosx-10.9-intel/egg/pkg_resources.py", line 492, in run_script

  File "build/bdist.macosx-10.9-intel/egg/pkg_resources.py", line 1350, in run_script
    for name in eagers:
  File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/EGG-INFO/scripts/scrapy", line 4, in <module>
    execute()
  File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/cmdline.py", line 143, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/cmdline.py", line 89, in _run_print_help
    func(*a, **kw)
  File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/commands/crawl.py", line 47, in run
    crawler = self.crawler_process.create_crawler()
  File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/crawler.py", line 87, in create_crawler
    self.crawlers[name] = Crawler(self.settings)
  File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/crawler.py", line 25, in __init__
    self.spiders = spman_cls.from_crawler(self)
  File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/spidermanager.py", line 35, in from_crawler
    sm = cls.from_settings(crawler.settings)
  File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/spidermanager.py", line 31, in from_settings
    return cls(settings.getlist('SPIDER_MODULES'))
  File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/spidermanager.py", line 22, in __init__
    for module in walk_modules(name):
  File "/Library/Python/2.7/site-packages/Scrapy-0.20.0-py2.7.egg/scrapy/utils/misc.py", line 68, in walk_modules
    submod = import_module(fullpath)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/xxx/scrape/steam/steam_crawler/spiders/steam.py", line 18
SyntaxError: Non-ASCII character '\xe2' in file /xxx/scrape/steam/steam_crawler/spiders/steam.py on line 18, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details

I have a feeling that all I need to do is somehow tell Scrapy that the characters it will come across follow UTF-8 rather than ASCII, because of the characters in the data. But from what I can gather, it should be collecting that information from the head of the page being scraped, which in the case of this site is:

<meta charset="utf-8">

Which confuses me! Any insight/reading that isn't the Scrapy documentation would also interest me!

1 Answer:

Answer 0 (score: 3):

It looks like you are using typographic quotes “ ” instead of straight double quotes " (see “price-final” on line 18 of the spider).
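Incidentally, the PEP 263 declaration that the traceback points to concerns the encoding of the spider's own .py source file, not the page being scraped. If non-ASCII characters really do need to stay in the file, a minimal sketch of the top of steam.py would be:

# -*- coding: utf-8 -*-
# PEP 263 encoding declaration: lets Python 2 decode non-ASCII bytes
# (such as typographic quotes) in this source file.
from scrapy.spider import BaseSpider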

By the way, a better way to loop over all the table rows is:

for tr in sel.xpath("//tr"):
    item = SteamItem()
    item['title'] = tr.xpath('td[2]/a/text()').extract()
    item['price'] = tr.xpath('td[@class="price-final"]/text()').extract()
    yield item
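Putting the two points together, a rough sketch of what the corrected spider could look like on Scrapy 0.20 is below; the item fields come from the question, while trimming allowed_domains to the bare domain and iterating over //tbody/tr are adjustments not taken from the original code:

# -*- coding: utf-8 -*-
from scrapy.spider import BaseSpider
from scrapy.selector import Selector

from steam_crawler.items import SteamItem


class SteamSpider(BaseSpider):
    name = "steam"
    allowed_domains = ["steamdb.info"]
    start_urls = ["http://steamdb.info/sales/?displayOnly=all&category=0&cc=uk"]

    def parse(self, response):
        sel = Selector(response)
        # Iterate over the rows directly instead of indexing by a counter.
        for tr in sel.xpath("//tbody/tr"):
            item = SteamItem()
            # extract() returns a list of unicode strings; no .encode() needed here.
            item["title"] = tr.xpath("td[2]/a/text()").extract()
            item["price"] = tr.xpath('td[@class="price-final"]/text()').extract()
            yield item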