Python Scrapy ReactorNotRestartable()

Date: 2018-07-05 06:30:20

Tags: python scrapy ibm-cloud openwhisk ibm-cloud-functions

I am trying to run Scrapy as a function on IBM Cloud. My __main__.py looks like this:

import scrapy
from scrapy import crawler
from scrapy.crawler import CrawlerProcess
from twisted.internet import reactor


class AutoscoutListSpider(scrapy.Spider):
    name = "vehicles list"

    def __init__(self, params, *args, **kwargs):
        super(AutoscoutListSpider, self).__init__(*args, **kwargs)
        make = params.get("make", None)
        model = params.get("model", None)
        mileage = params.get("mileage", None)

        init_url = "https://www.autoscout24.be/nl/resultaten?sort=standard&desc=0&ustate=N%2CU&size=20&page=1&cy=B&mmvmd0={0}&mmvmk0={1}&kmto={2}&atype=C&".format(
            model, make, mileage)
        self.start_urls = [init_url]

    def parse(self, response):
        # Get total result on list load
        init_total_results = int(response.css('.cl-filters-summary-counter::text').extract_first().replace('.', ''))
        if init_total_results > 400:
            yield {"message": "There are MORE than 400 results"}
        else:
            yield {"message": "There are LESS than 400 results"}


def main(params):
    process = CrawlerProcess()
    try:
        runner = crawler.CrawlerRunner()
        runner.crawl(AutoscoutListSpider, params)
        d = runner.join()
        d.addBoth(lambda _: reactor.stop())
        reactor.run()
        return {"Success ": params}
    except Exception as e:
        return {"Error ": e, "params ": params}

I uploaded it as an IBM Cloud Function, and that part went fine.

The problem is that when I run it (from the Python console, or by invoking the IBM function), the first execution works, but if I try to execute it a second time I get this error:

{'Error ': ReactorNotRestartable(), 'params ': {'make': '9', 'model': '1624', 'mileage': '2500'}}

It is invoked like this:

IBM Cloud CLI:

ibmcloud wsk action invoke --result ascrawler --param make 9 --param model 1624 --param mileage 2500

Python console:

main({"make":"9", "model":"1624", "mileage":"2500"})

With the following code I tried to make repeated runs possible, but without success (Twisted's reactor is a process-wide singleton: once reactor.stop() has run, reactor.run() cannot be called again in the same process):

runner = crawler.CrawlerRunner()
runner.crawl(AutoscoutListSpider, params)
d = runner.join()
d.addBoth(lambda _: reactor.stop())
reactor.run()

Is there any way to solve this?

1 Answer:

Answer 0 (score: 0):

Did you mean to use CrawlerRunner instead of CrawlerProcess?

According to the documentation, CrawlerRunner should be used instead of CrawlerProcess "if your application is already using Twisted and you want to run Scrapy in the same reactor." That is not the case for a Python action in IBM Cloud Functions.

Change the main method to the following code and it works fine:

from scrapy.crawler import CrawlerProcess

def main(params):
    process = CrawlerProcess()
    try:
        process.crawl(AutoscoutListSpider, params)
        process.start()  # blocks until the crawl finishes
        return {"Success ": params}
    except Exception as e:
        return {"Error ": e, "params ": params}
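One caveat worth noting: process.start() runs the Twisted reactor, which is a process-global singleton that cannot be started twice. If the serverless runtime keeps the container warm and reuses the same Python process for a second invocation, the same ReactorNotRestartable error can reappear. A common workaround is to run each crawl in a fresh child process, so the reactor is created, run, and stopped entirely inside that process. The sketch below shows only the pattern; `_run_crawl` is a stdlib placeholder (my name, not from the original post) standing in for the real CrawlerProcess call, so the example runs without Scrapy installed:

```python
# Sketch: isolate each crawl in a child process so the Twisted reactor
# never needs to restart in the long-lived parent process.
from multiprocessing import Process, Queue


def _run_crawl(params, queue):
    # Placeholder for the real work: in the actual action this would create
    # a CrawlerProcess, run the spider, and put the result on the queue.
    queue.put({"Success ": params})


def main(params):
    queue = Queue()
    child = Process(target=_run_crawl, args=(params, queue))
    child.start()
    child.join()
    # The reactor (if any) lived and died inside the child, so the parent
    # can call main() again without ReactorNotRestartable.
    return queue.get()


if __name__ == "__main__":
    main({"make": "9", "model": "1624", "mileage": "2500"})
    main({"make": "9", "model": "1624", "mileage": "2500"})  # second call also works
```

Each invocation pays the cost of spawning a process, but the parent stays reusable across any number of calls.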