Which tool or set of tools would you use to scale scrapyd horizontally, dynamically adding new machines to a scrapyd cluster and, when needed, running N instances per machine?
It is not necessary for all instances to share a common job queue, but that would be great. Scrapy-cluster looks promising for the job, but I want a Scrapyd-based solution, so I am open to other alternatives and suggestions.
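(For reference on the "N instances per machine" part: scrapyd reads a scrapyd.conf from the directory it is started in, so one way to run several instances on one machine is to launch each from its own working directory with a distinct http_port. A minimal sketch; the port range and directory names are assumptions:)

    import os
    import subprocess

    N = 3             # number of scrapyd instances on this machine (assumption)
    BASE_PORT = 6800  # assumed free port range

    for i in range(N):
        workdir = 'scrapyd-instance-%d' % i
        os.makedirs(workdir, exist_ok=True)
        # scrapyd picks up ./scrapyd.conf from its working directory
        with open(os.path.join(workdir, 'scrapyd.conf'), 'w') as conf:
            conf.write('[scrapyd]\nhttp_port = %d\n' % (BASE_PORT + i))
        subprocess.Popen(['scrapyd'], cwd=workdir)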
Answer 0 (score: 1)
I wrote my own load balancer for Scrapyd using its API and this wrapper.
    from random import shuffle

    from scrapyd_api.wrapper import ScrapydAPI


    class JobLoadBalancer(object):
        # Note: `settings` is assumed to be the project's configuration module,
        # providing SERVERS_URLS, DEFAULT_PROJECT and ACCEPTABLE_PENDING.

        @classmethod
        def get_less_occupied(
                cls,
                servers_urls=settings.SERVERS_URLS,
                project=settings.DEFAULT_PROJECT,
                acceptable=settings.ACCEPTABLE_PENDING):
            free_runner = {'num_jobs': 9999, 'client': None}
            # Shuffle the servers so that ties do not always favour the same host
            shuffle(servers_urls)
            for url in servers_urls:
                scrapyd = ScrapydAPI(target=url)
                jobs = scrapyd.list_jobs(project)
                num_jobs = len(jobs['pending'])
                if free_runner['num_jobs'] > num_jobs:
                    free_runner['num_jobs'] = num_jobs
                    free_runner['client'] = scrapyd
                # Optimization: stop looking as soon as a server with an
                # acceptable number of pending jobs has been found
                if free_runner['client'] and free_runner['num_jobs'] <= acceptable:
                    break
            return free_runner['client']
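A minimal usage sketch (the project and spider names here are placeholders, not from the original answer): pick the least loaded server and schedule a run on it.

    client = JobLoadBalancer.get_less_occupied()
    if client is not None:
        # schedule() returns the job id assigned by scrapyd
        job_id = client.schedule('myproject', 'myspider')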
Unit test:
    import unittest  # assumed; the original snippet omits the test-case class header


    class TestFactory(unittest.TestCase):  # class header reconstructed; the base class is an assumption

        def setUp(self):
            super(TestFactory, self).setUp()
            # Make sure these servers are running
            settings.SERVERS_URLS = [
                'http://localhost:6800',
                'http://localhost:6900'
            ]
            self.project = 'dummy'
            self.spider = 'dummy_spider'
            self.acceptable = 0

        def test_get_less_occupied(self):
            # Add dummy jobs to the first server so that the second one is chosen
            scrapyd = ScrapydAPI(target=settings.SERVERS_URLS[0])
            scrapyd.schedule(project=self.project, spider=self.spider)
            scrapyd.schedule(project=self.project, spider=self.spider)

            second_server_url = settings.SERVERS_URLS[1]

            scrapyd = JobLoadBalancer.get_less_occupied(
                servers_urls=settings.SERVERS_URLS,
                project=self.project,
                acceptable=self.acceptable)

            self.assertEqual(scrapyd.target, second_server_url)
This code targets an older version of scrapyd and was written more than a year ago.
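For the "dynamically add new machines" part of the question, one option (my assumption, not part of the original answer) is to populate the server list from the environment instead of hard-coding it, so machines can join the pool without redeploying. The SCRAPYD_SERVERS variable name is hypothetical:

    import os

    # Hypothetical helper: SCRAPYD_SERVERS is an assumed comma-separated
    # environment variable listing the scrapyd instances in the pool.
    def load_server_urls(default=('http://localhost:6800',)):
        raw = os.environ.get('SCRAPYD_SERVERS', '')
        urls = [url.strip() for url in raw.split(',') if url.strip()]
        return urls or list(default)

    # e.g. settings.SERVERS_URLS = load_server_urls()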