I'm running Dask on an eight-node Kubernetes cluster, with one scheduler replica and eight worker replicas specified in the manifest. My code processes 80 roughly equal-sized files, and I want to see how performance scales from one worker up to eight. I'm doing something like this:
from typing import Dict, List

from distributed import Client, get_client

client: Client = get_client()
# scheduler_info() keys each worker by its full address, e.g. 'tcp://10.1.2.3:40000'
workers = client.scheduler_info()['workers']
worker_ips: List[str] = list(workers.keys())
my_files: List[str] = ["list", "of", "files", "to", "be", "processed", "..."]
# This dictionary maps a worker ip to a uniform subset of my_files
files_per_worker: Dict[str, List[str]] = {
    "worker_ip1": ["list", "to", "..."],     # files for worker1 only
    "worker_ip2": ["of", "be"],              # files for worker2 only
    "worker_ip3": ["files", "processed"],    # files for worker3 only
}
# Send each worker a subset of the work
futures = [client.submit(do_work, subset_of_files, workers=[ip])
           for (ip, subset_of_files) in files_per_worker.items()]
# Gather results from each worker, blocking until completion, then reduce
# the partial results into the final result
result = finalize_partial_results([f.result() for f in futures])
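The hard-coded dictionary above is only an illustration. For reference, one way such a uniform mapping could be built is a simple round-robin split (a minimal sketch, not my exact code):

from typing import Dict, List

def round_robin_split(files: List[str], ips: List[str]) -> Dict[str, List[str]]:
    """Assign files to workers in round-robin order, so each worker
    receives len(files) / len(ips) files (give or take one)."""
    subsets: Dict[str, List[str]] = {ip: [] for ip in ips}
    for i, f in enumerate(files):
        subsets[ips[i % len(ips)]].append(f)
    return subsets

# e.g. 80 files over 5 workers -> 16 files per worker
files_per_worker = round_robin_split(my_files, worker_ips[:5])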
A simplified summary of the results:
I had expected eight workers (one per physical node) to be optimal, but that turned out not to be the case. I even tested with different input datasets of different sizes. Five workers was consistently the fastest, and at six workers the runtime jumped sharply.
What could be causing this, and how can I avoid the performance drop? As far as I can tell, each worker_ip
corresponds to one physical node, so the work should be shared uniformly across the selected subset of workers.
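One way to sanity-check that assumption (a diagnostic sketch; client.run executes a function on every worker and returns the results keyed by worker address):

import socket

# Run socket.gethostname() on every worker; this maps each worker
# address to the hostname of the node its pod is running on.
hosts = client.run(socket.gethostname)
for addr, host in hosts.items():
    print(addr, "->", host)
# If two addresses report the same hostname, two worker pods landed on
# the same physical node and the one-worker-per-node assumption breaks.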