Using Dask Distributed as a task executor works well. In Celery, a task can be routed to a specific worker. How can I do the same with Dask Distributed?
Answer 0: (score: 1)
There are two options:
Specify workers by name, host, or IP (positive declarations only):
dask-worker scheduler_address:8786 --name worker_1
and then pass one of the following:
client.map(func, sequence, workers='worker_1')
client.map(func, sequence, workers=['192.168.1.100', '192.168.1.100:8989', 'alice', 'alice:8989'])
client.submit(f, x, workers='127.0.0.1')
client.submit(f, x, workers='127.0.0.1:55852')
client.submit(f, x, workers=['192.168.1.101', '192.168.1.100'])
future = client.compute(z, workers={z: '127.0.0.1',
                                    x: '192.168.0.1:9999'})
future = client.compute(z, workers={(x, y): ['192.168.1.100', '192.168.1.101:9999']})
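To make the matching rules above concrete, here is a small plain-Python sketch of how a scheduler can honor a `workers=` restriction: a spec may match a worker's name, its host, or its full `host:port` address. The function and variable names are illustrative assumptions, not Dask's actual scheduler internals.

```python
def eligible_workers(restriction, workers):
    """Return the worker addresses allowed by a `workers=` restriction.

    `restriction` is a single name/host/IP string or a list of them;
    `workers` maps address -> worker name, mirroring what a scheduler tracks.
    """
    if isinstance(restriction, str):
        restriction = [restriction]
    allowed = set()
    for spec in restriction:
        for address, name in workers.items():
            host = address.rsplit(":", 1)[0]
            # A spec can match the worker's name, its host, or host:port.
            if spec in (name, host, address):
                allowed.add(address)
    return allowed

workers = {
    "192.168.1.100:8989": "worker_1",
    "192.168.1.101:9999": "alice",
}

print(eligible_workers("worker_1", workers))
print(eligible_workers(["192.168.1.101", "192.168.1.100:8989"], workers))
```

With this model, `workers='worker_1'` resolves to the single matching address, while a list of hosts and addresses resolves to every worker it covers.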
Use the Resources concept. You can declare the resources available on a worker, e.g.:
dask-worker scheduler:8786 --resources "CAN_PROCESS_QUEUE_ALICE=2"
and then specify the resources a task requires, e.g.
client.submit(aggregate, processed, resources={'CAN_PROCESS_QUEUE_ALICE': 1})
or
z = some_dask_object.map_partitions(func)
z.compute(resources={tuple(z.__dask_keys__()): {'CAN_PROCESS_QUEUE_ALICE': 1}})
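The idea behind resources can be sketched in plain Python: a worker advertises counts such as `{'CAN_PROCESS_QUEUE_ALICE': 2}`, and a task is only placed on a worker whose remaining counts cover the task's request. This is a conceptual model of what `--resources` / `resources=` express, with assumed names, not Dask's real scheduling code.

```python
class ResourceWorker:
    """A worker with a declared pool of abstract resources."""

    def __init__(self, name, resources):
        self.name = name
        self.available = dict(resources)  # remaining resource counts

    def can_run(self, required):
        # Every required resource must have enough remaining capacity.
        return all(self.available.get(k, 0) >= v for k, v in required.items())

    def claim(self, required):
        # Deduct the task's requirements from the remaining pool.
        for k, v in required.items():
            self.available[k] -= v


def place_task(required, workers):
    """Assign the task to the first worker with enough free resources."""
    for w in workers:
        if w.can_run(required):
            w.claim(required)
            return w.name
    return None  # no worker can satisfy the request right now


workers = [
    ResourceWorker("plain", {}),
    ResourceWorker("alice", {"CAN_PROCESS_QUEUE_ALICE": 2}),
]

print(place_task({"CAN_PROCESS_QUEUE_ALICE": 1}, workers))
```

A worker with no matching resource declaration is never chosen for such a task, which is exactly how resource-tagged tasks get pinned to the workers started with the corresponding `--resources` flag.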