Scrapy callback str issue

Date: 2015-12-19 14:48:36

Tags: python scrapy

I am trying to run a scraper with Scrapy. This code has worked for me in the past, but now I am getting a strange error.

    _rules = (Rule(LinkExtractor(restrict_xpaths=(xpath_str)), follow=True,
                   callback='parse_url'),)

    def parse_url(self, response):
        print response.url
        ...

Basically, this is what I get when I run it:

Traceback (most recent call last):
  File "/usr/lib/pymodules/python2.7/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/usr/lib/pymodules/python2.7/scrapy/spidermiddlewares/offsite.py", line 28, in process_spider_output
    for x in result:
  File "/usr/lib/pymodules/python2.7/scrapy/spidermiddlewares/referer.py", line 22, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/usr/lib/pymodules/python2.7/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/lib/pymodules/python2.7/scrapy/spidermiddlewares/depth.py", line 54, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/lib/pymodules/python2.7/scrapy/spiders/crawl.py", line 67, in _parse_response
    cb_res = callback(response, **cb_kwargs) or ()
TypeError: 'str' object is not callable

Why is this happening? I have very similar code in another scraper?!

Here is the full code:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from ..model import Properties


class TestScraper(CrawlSpider):
    name = "test"
    start_urls = [Properties.start_url]

    _rules = (Rule(LinkExtractor(restrict_xpaths=Properties.xpath),
                   follow=True, callback='parse_url'),)

    def parse_url(self, response):
        print response.url

1 Answer:

Answer 0 (score: 0)

Change callback='parse_url' to callback=self.parse_url.
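
For context: CrawlSpider only resolves string callbacks for rules defined in the documented rules attribute. Its _compile_rules() step copies rules into _rules and replaces the string 'parse_url' with the bound method; because the spider above assigns _rules directly, that step is skipped, the raw string is passed to callback(response, **cb_kwargs), and Python raises TypeError: 'str' object is not callable. Below is a minimal sketch of the conventional fix (keep the string callback, but define rules without the underscore rather than switching to self.parse_url), reusing Properties.start_url and Properties.xpath from the question's code:

    from scrapy.linkextractors import LinkExtractor
    from scrapy.spiders import CrawlSpider, Rule
    from ..model import Properties


    class TestScraper(CrawlSpider):
        name = "test"
        start_urls = [Properties.start_url]

        # Defining `rules` (no leading underscore) lets CrawlSpider._compile_rules()
        # resolve the string 'parse_url' into the bound method before it is called.
        rules = (
            Rule(LinkExtractor(restrict_xpaths=Properties.xpath),
                 follow=True, callback='parse_url'),
        )

        def parse_url(self, response):
            print response.url

The answer's callback=self.parse_url works for the same reason (a bound method is callable), but self is only available inside a method, so with that variant the Rule would have to be built in __init__ rather than at class level.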