I am trying to select the "next" button on a website, and the link's text is a right arrow. When I inspect the source with the Scrapy shell, that character shows up as the unicode literal \u2192. With that, I developed the following Scrapy CrawlSpider:
# -*- coding: utf-8 -*-
import scrapy
from scrapy.contrib.linkextractors import LinkExtractor
from scrapy.contrib.loader.processor import MapCompose
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy import log, Request
from yelpscraper.items import YelpscraperItem
import re, urlparse


class YelpSpider(CrawlSpider):
    name = 'yelp'
    allowed_domains = ['yelp.com']
    start_urls = ['http://www.yelp.com/search?find_desc=attorney&find_loc=Austin%2C+TX&start=0']

    rules = (
        Rule(LinkExtractor(allow=r'biz', restrict_xpaths='//*[contains(@class, "natural-search-result")]//a[@class="biz-name"]'),
             callback='parse_item', follow=True),
        Rule(LinkExtractor(allow=r'start', restrict_xpaths=u'//a[contains(@class, "prev-next")]/text()[contains(., "\u2192")]'),
             follow=True)
    )

    def parse_item(self, response):
        i = YelpscraperItem()
        i['phone'] = self.beautify(response.xpath('//*[@class="biz-phone"]/text()').extract())
        i['state'] = self.beautify(response.xpath('//span[@itemprop="addressRegion"]/text()').extract())
        i['company'] = self.beautify(response.xpath('//h1[contains(@class, "biz-page-title")]/text()').extract())
        i['website'] = self.beautify(response.xpath('//div[@class="biz-website"]/a/text()').extract())
        yield i
Note the second Rule declaration in the rules attribute, which contains the problematic unicode character:
    Rule(LinkExtractor(allow=r'start', restrict_xpaths=u'//a[contains(@class, "prev-next")]/text()[contains(., "\u2192")]'),
         follow=True)
When I try to run this spider, I get the following traceback:
Traceback (most recent call last):
  File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 824, in runUntilCurrent
    call.func(*call.args, **call.kw)
  File "C:\Python27\lib\site-packages\twisted\internet\task.py", line 607, in _tick
    taskObj._oneWorkUnit()
  File "C:\Python27\lib\site-packages\twisted\internet\task.py", line 484, in _oneWorkUnit
    result = next(self._iterator)
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\utils\defer.py", line 57, in <genexpr>
    work = (callable(elem, *args, **named) for elem in iterable)
--- <exception caught here> ---
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\utils\defer.py", line 96, in iter_errback
    yield next(it)
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\contrib\spidermiddleware\offsite.py", line 26, in process_spider_output
    for x in result:
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\contrib\spidermiddleware\referer.py", line 22, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\contrib\spidermiddleware\urllength.py", line 33, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\contrib\spidermiddleware\depth.py", line 50, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\contrib\spiders\crawl.py", line 73, in _parse_response
    for request_or_item in self._requests_to_follow(response):
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\contrib\spiders\crawl.py", line 52, in _requests_to_follow
    links = [l for l in rule.link_extractor.extract_links(response) if l not in seen]
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\contrib\linkextractors\lxmlhtml.py", line 107, in extract_links
    links = self._extract_links(doc, response.url, response.encoding, base_url)
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\linkextractor.py", line 94, in _extract_links
    return self.link_extractor._extract_links(*args, **kwargs)
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\contrib\linkextractors\lxmlhtml.py", line 50, in _extract_links
    for el, attr, attr_val in self._iter_links(selector._root):
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\contrib\linkextractors\lxmlhtml.py", line 38, in _iter_links
    for el in document.iter(etree.Element):
exceptions.AttributeError: 'unicode' object has no attribute 'iter'
All I want to do is select this link, and I can't think of a way to select it without using that character (its position changes from page to page). Is there any way to select it by its ASCII code, or by something other than a unicode literal, which seems to be causing the problem?
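For reference, the \u2192 escape never requires pasting the glyph itself into the source; on Python 2 (which this Scrapy version runs on), encoding the unicode XPath to UTF-8 produces a plain byte string. A minimal sketch, using the same XPath as the spider above:

```python
# The arrow can be written as an escape sequence and the whole XPath
# encoded to a byte string, so the glyph never appears in the source.
xpath = u'//a[contains(@class, "prev-next")]/text()[contains(., "\u2192")]'
encoded = xpath.encode('utf-8')  # str on Python 2, bytes on Python 3

# U+2192 encodes to the three UTF-8 bytes E2 86 92
has_arrow = b'\xe2\x86\x92' in encoded
```

Whether this satisfies the `restrict_xpaths` type check is version-dependent, but it shows the character can be expressed without any literal glyph.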
Answer (score: 0)
According to the documentation, restrict_xpaths should be a list or a str. You are passing a unicode string; that is why you are getting the error. Besides, you don't need to look at text() at all: checking for the prev-next class is enough:
rules = (
    Rule(LinkExtractor(allow=r'biz', restrict_xpaths='//*[contains(@class, "natural-search-result")]//a[@class="biz-name"]'),
         callback='parse_item', follow=True),
    Rule(LinkExtractor(allow=r'start', restrict_xpaths='//a[contains(@class, "prev-next")]'),
         follow=True)
)
Tested: it crawls without errors and follows the pagination.
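The class-only match works because the arrow glyph lives in the link's *text*, while prev-next is a stable attribute of the anchor itself. A rough illustration (the markup below is hypothetical, loosely modeled on Yelp's next-page anchor, and a regex stands in for the real link extractor):

```python
import re

# Hypothetical pagination markup; the real Yelp page differs.
html = u'<a class="page-option prev-next" href="/search?find_desc=attorney&start=10">\u2192</a>'

# Matching on the stable class attribute instead of the arrow text
# keeps the unicode glyph out of the selector entirely.
m = re.search(r'<a[^>]*class="[^"]*prev-next[^"]*"[^>]*href="([^"]+)"', html)
next_url = m.group(1) if m else None
```

The same idea carries over to the XPath: `//a[contains(@class, "prev-next")]` selects the anchor element regardless of what text it contains.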