Scrapy passing response, missing 1 required positional argument

Date: 2017-05-25 02:43:52

Tags: python scrapy arguments web-crawler

Python newcomer here, coming from PHP. I want to scrape some sites with Scrapy and have worked through the tutorial and some simple scripts. Now, writing the real deal, I get this error:

Traceback (most recent call last):
  File "C:\Users\Naltroc\Miniconda3\lib\site-packages\twisted\internet\defer.py", line 67, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "C:\Users\Naltroc\Documents\Python Scripts\tutorial\tutorial\spiders\quotes_spider.py", line 52, in parse
    self.dispatcher[site](response)
TypeError: thesaurus() missing 1 required positional argument: 'response'

Scrapy automatically instantiates the spider when the shell command scrapy crawl words is run.

As far as I know, self is the first argument of any class method. When you call a class method you don't pass self yourself; Python supplies the instance automatically, and the variables you pass fill the remaining parameters.
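For example, my mental model (a toy class, not from my spider):

class Greeter:
    def hello(self, name):
        print('hello', name)

g = Greeter()
g.hello('world')            # works: Python passes `g` as `self` automatically
# Greeter.hello('world')    # fails: 'world' lands in `self` and `name` is missing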

First this is called:

# Scrapy automatically provides `response` to `parse()` when coming from `start_requests()`
def parse(self, response):
        site = response.meta['site']
        #same as "site = thesaurus"
        self.dispatcher[site](response)
        #same as "self.dispatcher['thesaurus'](response)

Then

def thesaurus(self, response):
        filename = 'thesaurus.txt'
        words = ''
        ul = response.css('.relevancy-block ul')
        for idx, u in enumerate(ul):
            if idx == 1: 
                break
            words = u.css('.text::text').extract()

        self.save_words(filename, words)

In PHP this should be the same as calling $this->thesaurus($response). parse is clearly sending response along as a variable, but Python says it is missing. Where did it go?

Full code:

import scrapy

class WordSpider(scrapy.Spider):
    def __init__(self, keyword = 'apprehensive'):
        self.k = keyword
    name = "words"
    # Utilities
    def make_csv(self, words):
        csv = ''
        for word in words:
            csv += word + ','
        return csv

    def save_words(self, words, fp):
        with open(fp, 'w') as f:
            f.seek(0)
            f.truncate()
            csv = self.make_csv(words)
            f.write(csv)

    # site specific parsers
    def thesaurus(self, response):
        filename = 'thesaurus.txt'
        words = ''
        print("in func self is defined as ", self)
        ul = response.css('.relevancy-block ul')
        for idx, u in enumerate(ul):
            if idx == 1:
                break
            words = u.css('.text::text').extract()
            print("words is ", words)

        self.save_words(filename, words)

    def oxford(self):
        filename = 'oxford.txt'
        words = ''

    def collins(self):
        filename = 'collins.txt'
        words = ''

    # site/function mapping
    dispatcher = {
        'thesaurus': thesaurus,
        'oxford': oxford,
        'collins': collins,
    }

    def parse(self, response):
        site = response.meta['site']
        self.dispatcher[site](response)

    def start_requests(self):
        urls = {
            'thesaurus': 'http://www.thesaurus.com/browse/%s?s=t' % self.k,
            #'collins': 'https://www.collinsdictionary.com/dictionary/english-thesaurus/%s' % self.k,
            #'oxford': 'https://en.oxforddictionaries.com/thesaurus/%s' % self.k,
        }

        for site, url in urls.items():
            print(site, url)
            yield scrapy.Request(url, meta={'site': site}, callback=self.parse)

1 Answer:

Answer 0 (score: 2)

There are a lot of tiny mistakes scattered around your code. I took the liberty of cleaning it up to follow common Python/Scrapy idioms :)
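First, the actual cause of your TypeError: a function stored in a class-level dict is a plain function, not a bound method, so `self.dispatcher[site](response)` ends up calling `thesaurus(response)` with `response` filling the `self` slot and nothing left over for `response`. A minimal standalone demo (the names here are illustrative, not from your spider):

class Demo:
    def greet(self, name):
        return 'hi ' + name

    # A function stored in a class-level dict stays a plain function;
    # looking it up through the dict never creates a bound method.
    dispatcher = {'greet': greet}

d = Demo()
try:
    d.dispatcher['greet']('world')       # 'world' fills `self`; `name` is missing
except TypeError as e:
    print(e)                             # greet() missing 1 required positional argument: 'name'

print(getattr(d, 'greet')('world'))      # bound method via getattr: prints 'hi world'

With that in mind, here is the cleaned-up version: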

import logging
import scrapy


# Utilities
# should probably use csv module here or `scrapy crawl -o` flag instead
def make_csv(words):
    csv = ''
    for word in words:
        csv += word + ','
    return csv


def save_words(words, fp):
    with open(fp, 'w') as f:
        f.seek(0)
        f.truncate()
        csv = make_csv(words)
        f.write(csv)


class WordSpider(scrapy.Spider):
    name = "words"

    def __init__(self, keyword='apprehensive', **kwargs):
        super(WordSpider, self).__init__(**kwargs)
        self.k = keyword

    def start_requests(self):
        urls = {
            'thesaurus': 'http://www.thesaurus.com/browse/%s?s=t' % self.k,
            # 'collins': 'https://www.collinsdictionary.com/dictionary/english-thesaurus/%s' % self.k,
            # 'oxford': 'https://en.oxforddictionaries.com/thesaurus/%s' % self.k,
        }

        for site, url in urls.items():
            yield scrapy.Request(url, meta={'site': site}, callback=self.parse)

    def parse(self, response):
        parser = getattr(self, response.meta['site'])  # retrieve method by name
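        # Unlike a lookup in a class-level dict, getattr on the instance
        # returns a bound method, so `self` is filled in automatically.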
        logging.info(f'parsing using: {parser}')
        parser(response)

    # site specific parsers
    def thesaurus(self, response):
        filename = 'thesaurus.txt'
        words = []
        print("in func self is defined as ", self)
        ul = response.css('.relevancy-block ul')
        for idx, u in enumerate(ul):
            if idx == 1:
                break
            words = u.css('.text::text').extract()
            print("words is ", words)
        save_words(words, filename)

    def oxford(self, response):
        filename = 'oxford.txt'
        words = ''

    def collins(self, response):
        filename = 'collins.txt'
        words = ''
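Two closing notes. Spider arguments passed with `-a` are forwarded to `__init__`, so the keyword can be chosen at the command line; and if you make the parsers `yield` dicts instead of writing files by hand, the `-o` flag mentioned in the comment above handles serialization for you:

scrapy crawl words -a keyword=happy -o words.csv

(As written, the parsers don't yield any items yet, so `-o` would stay empty until they do.)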