Is it possible to dynamically create pipelines in Scrapy?

Asked: 2016-09-18 19:23:49

Tags: python scrapy

I have a pipeline that posts data to a webhook, and I want to reuse it for another spider. My pipeline looks like this:

import json

import requests
from scrapy.exceptions import DropItem

class Poster(object):
    def process_item(self, item, spider):
        item_attrs = {
          "url": item['url'], "price": item['price'],
          "description": item['description'], "title": item['title']
        }

        data = json.dumps({"events": [item_attrs]})

        poster = requests.post(
            "http://localhost:3000/users/1/web_requests/69/supersecretstring",
            data=data, headers={'content-type': 'application/json'}
        )

        if poster.status_code != 200:
            raise DropItem("error posting event %s code=%s" % (item, poster.status_code))

        return item

The problem is that in another spider I need to post to a different URL, possibly with different attributes. Instead of this:

import scrapy

class Spider(scrapy.Spider):
    name = "products"
    start_urls = (
        'some_url',
    )
    custom_settings = {
        'ITEM_PIPELINES': {
           'spider.pipelines.Poster': 300,
        },
    }

is it possible to specify something like this:

    custom_settings = {
        'ITEM_PIPELINES': {
           spider.pipelines.Poster(some_other_url, some_attributes): 300,
        },
    }

I know the URL I need and the fields I want to extract at the time the spider is created.

1 Answer:

Answer 0 (score: 3)

There are a few ways to do this, but the simplest is to use open_spider(self, spider) in your pipeline.

Example use case:

scrapy crawl myspider -a pipeline_count=123

Then set up your pipeline to read it:

class MyPipeline(object):
    count = None

    def open_spider(self, spider):
        # arguments passed on the command line with -a become spider attributes
        count = getattr(spider, 'pipeline_count')
        self.count = int(count)

    # or, as starrify pointed out in the comment below,
    # access it directly in process_item
    def process_item(self, item, spider):
        count = getattr(spider, 'pipeline_count')
        item['count'] = count
        return item
    <...>
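
Applied to the Poster pipeline from the question, the same idea might look like the sketch below. The webhook_url and posted_fields attributes are assumptions introduced for illustration, not part of Scrapy's API: they are ordinary spider attributes (set as class attributes or passed via -a) that the pipeline reads in open_spider.

import json

import requests
from scrapy.exceptions import DropItem

class Poster(object):
    def open_spider(self, spider):
        # hypothetical attributes: each spider supplies its own webhook URL
        # and the list of item fields it wants posted
        self.url = getattr(spider, 'webhook_url')
        self.fields = getattr(spider, 'posted_fields')

    def process_item(self, item, spider):
        # build the payload from whatever fields this spider declared
        item_attrs = {field: item[field] for field in self.fields}
        data = json.dumps({"events": [item_attrs]})

        poster = requests.post(
            self.url, data=data,
            headers={'content-type': 'application/json'}
        )

        if poster.status_code != 200:
            raise DropItem("error posting event %s code=%s" % (item, poster.status_code))

        return item

Each spider then only declares its own values (ProductsSpider here is an illustrative name, reusing the URL and fields from the question):

import scrapy

class ProductsSpider(scrapy.Spider):
    name = "products"
    start_urls = ('some_url',)
    webhook_url = "http://localhost:3000/users/1/web_requests/69/supersecretstring"
    posted_fields = ['url', 'price', 'description', 'title']
    custom_settings = {
        'ITEM_PIPELINES': {'spider.pipelines.Poster': 300},
    }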