Python Scrapy ReactorNotRestartable alternatives

Date: 2016-09-11 08:52:33

Tags: python flask scrapy reactor twisted.internet

I have been trying to create an application in Python using Scrapy with the following feature:

  • A REST API (I used Flask) that listens for all crawl/scrape requests and returns the response after the crawl. (The crawling part is short enough that the connection can be kept alive until crawling finishes.)

I am able to do this with the following code:

from scrapy import signals
from scrapy.crawler import Crawler
from scrapy.utils.project import get_project_settings
from twisted.internet import reactor

items = []
def add_item(item):
    items.append(item)

# Set up the crawler and collect each scraped item via the item_passed signal.
crawler = Crawler(SpiderClass, settings=get_project_settings())
crawler.signals.connect(add_item, signal=signals.item_passed)

# This makes the reactor stop; without it, the code blocks forever
# at the reactor.run() line.
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)  #@UndefinedVariable
crawler.crawl(requestParams=requestParams)
# Start crawling; reactor.run() blocks until reactor.stop() is called.
reactor.run()  #@UndefinedVariable
return str(items)
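
For context, this snippet runs inside a Flask view (the traceback below places it in submitForm in RequestListener.py). Here is a minimal sketch of what that surrounding route might look like; the route path, the request parsing, and the run_crawl wrapper are assumptions, not code from the question:

from flask import Flask, request

app = Flask(__name__)

@app.route("/submitForm", methods=["POST"])
def submitForm():
    # Hypothetical: build the spider's arguments from the POST body.
    requestParams = request.form.to_dict()
    # run_crawl is a hypothetical wrapper holding the snippet above,
    # ending with `return str(items)`.
    return run_crawl(requestParams)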

Now the problem I have is that after stopping the reactor (which seems necessary to me, since I don't want to stay stuck at reactor.run()), I cannot accept any further requests. Once the first request completes, I get the following error:

Traceback (most recent call last):
  File "c:\python27\lib\site-packages\flask\app.py", line 1988, in wsgi_app
    response = self.full_dispatch_request()
  File "c:\python27\lib\site-packages\flask\app.py", line 1641, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "c:\python27\lib\site-packages\flask\app.py", line 1544, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "c:\python27\lib\site-packages\flask\app.py", line 1639, in full_dispatch_request
    rv = self.dispatch_request()
  File "c:\python27\lib\site-packages\flask\app.py", line 1625, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "F:\my_workspace\jobvite\jobvite\com\jobvite\web\RequestListener.py", line 38, in submitForm
    reactor.run() #@UndefinedVariable
  File "c:\python27\lib\site-packages\twisted\internet\base.py", line 1193, in run
    self.startRunning(installSignalHandlers=installSignalHandlers)
  File "c:\python27\lib\site-packages\twisted\internet\base.py", line 1173, in startRunning
    ReactorBase.startRunning(self)
  File "c:\python27\lib\site-packages\twisted\internet\base.py", line 684, in startRunning
    raise error.ReactorNotRestartable()
ReactorNotRestartable

This is obvious, since we cannot restart the reactor.

So my questions are:

1) How can I serve the next crawl request?

2) Is there a way to move past the reactor.run() line without stopping the reactor?

2 Answers:

Answer 0 (score: 1)

I suggest you use a queue system like RQ (chosen for simplicity; there are a few others). You can have a crawl function:

from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings

from spiders import MySpider

def runCrawler(url, keys, mode, outside, uniqueid):
    configure_logging()
    runner = CrawlerRunner(get_project_settings())

    # Forward the job's arguments to the spider.
    d = runner.crawl(MySpider, url=url, keys=keys, mode=mode,
                     outside=outside, uniqueid=uniqueid)

    # Stop the reactor once the crawl finishes (success or failure),
    # then block until that happens.
    d.addBoth(lambda _: reactor.stop())
    reactor.run()

Then, in your main code, use an RQ queue to collect the crawler runs:

# other imports
import redis
from rq import Queue

pool = redis.ConnectionPool(host=REDIS_HOST, port=REDIS_PORT, db=your_redis_db_number)
redis_conn = redis.Redis(connection_pool=pool)

q = Queue('parse', connection=redis_conn)

# urlSet is a list of http:// or https:// style URLs
for url in urlSet:
    job = q.enqueue(runCrawler, url, keys, mode, outside, uniqueid, timeout=600)

Don't forget to start an RQ worker process on the same queue name (here, parse). For example, run this in a terminal session:

rq worker parse
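
This sidesteps ReactorNotRestartable because the default RQ worker forks a fresh work-horse process for every job, so each runCrawler call gets a brand-new reactor. If the Flask endpoint still needs to report back when a crawl is done, one option is to poll the job's status. A minimal sketch follows; this helper is an assumption rather than part of the original answer, and since runCrawler returns nothing useful, the spider should persist its items somewhere like Redis or a database:

import time

from rq.job import Job

def wait_for_crawl(job_id, redis_conn, poll_interval=1.0, timeout=600):
    # Re-fetch the enqueued job and poll until it finishes, fails,
    # or the timeout elapses.
    job = Job.fetch(job_id, connection=redis_conn)
    deadline = time.time() + timeout
    while time.time() < deadline:
        job.refresh()  # re-read the job's state from Redis
        if job.is_finished or job.is_failed:
            return job.get_status()
        time.sleep(poll_interval)
    return 'timeout'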

Answer 1 (score: 1)

Here is a simple solution to your problem:

from flask import Flask
import threading
import subprocess
import sys

app = Flask(__name__)

class myThread(threading.Thread):
    def __init__(self, target):
        threading.Thread.__init__(self)
        self.target = target
    def run(self):
        # Run the callable that was handed in at construction time.
        self.target()

def start_crawl():
    # Launch the crawler in a separate process so it gets a fresh reactor.
    pid = subprocess.Popen([sys.executable, "start_request.py"])
    return

@app.route("/crawler/start")
def start_req():
    print ":request"
    # One new thread per incoming request; each thread forks its own
    # crawler process.
    threadObj = myThread(start_crawl)
    threadObj.start()
    return "Your crawler is in running state"

if __name__ == "__main__":
    app.run(port=5000)

In the solution above, I assume that you can start your crawler from the command line by running the start_request.py file in a shell.

What we are doing here is using threading in Python to start a new thread for each incoming request, so you can easily run a crawler instance in parallel for every hit. You can control the number of threads with threading.activeCount(), as sketched below.
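
For example, the route could reject new crawls once too many threads are already alive. A minimal sketch building on the code above; the MAX_THREADS cap and the 503 response are assumptions:

import threading

MAX_THREADS = 10  # assumed cap; tune it to your machine

@app.route("/crawler/start_limited")
def start_req_limited():
    # activeCount() includes the main thread, so allow MAX_THREADS
    # crawler threads on top of it.
    if threading.activeCount() >= MAX_THREADS + 1:
        return "Too many crawls running, try again later", 503
    threadObj = myThread(start_crawl)
    threadObj.start()
    return "Your crawler is in running state"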