I am trying to work out the best way to map a dask Series with a large mapping. The straightforward series.map(large_mapping) raises

UserWarning: Large object of size <X> MB detected in task graph

and suggests using client.scatter and client.submit, but the latter does not fix the problem and is in fact much slower. Trying client.scatter with broadcast=True does not help either.
import argparse

import distributed
import dask.dataframe as dd
import numpy as np
import pandas as pd


def compute(s_size, m_size, npartitions, scatter, broadcast, missing_percent=0.1, seed=1):
    np.random.seed(seed)
    mapping = dict(zip(np.arange(m_size), np.random.random(size=m_size)))
    # randint expects an integer bound, so cast the (float) product explicitly
    ps = pd.Series(np.random.randint(int((1 + missing_percent) * m_size), size=s_size))
    ds = dd.from_pandas(ps, npartitions=npartitions)
    if scatter:
        mapping_futures = client.scatter(mapping, broadcast=broadcast)
        future = client.submit(ds.map, mapping_futures)
        return future.result()
    else:
        return ds.map(mapping)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('-s', default=200000, type=int, help='series size')
    parser.add_argument('-m', default=50000, type=int, help='mapping size')
    parser.add_argument('-p', default=10, type=int, help='number of partitions')
    parser.add_argument('--scatter', action='store_true', help='scatter the mapping')
    parser.add_argument('--broadcast', action='store_true', help='broadcast the mapping')
    args = parser.parse_args()

    client = distributed.Client()
    ds = compute(args.s, args.m, args.p, args.scatter, args.broadcast)
    print(ds.compute().describe())
Answer 0 (score: 2)

Your problem is here:
In [4]: mapping = dict(zip(np.arange(50000), np.random.random(size=50000)))
In [5]: import pickle
In [6]: %time len(pickle.dumps(mapping))
CPU times: user 2.24 s, sys: 18.6 ms, total: 2.26 s
Wall time: 2.25 s
Out[6]: 6268809
So mapping is large and has no partitioning - in this situation scattering it is what gets you into trouble.
Consider this alternative:
def make_mapping():
    return dict(zip(np.arange(50000), np.random.random(size=50000)))

mapping = client.submit(make_mapping)  # ships the function, not the data,
                                       # and requires no serialisation
future = client.submit(ds.map, mapping)
This does not show the warning. However, using a dict for the mapping seems odd to me here; a series or plain array would encode the nature of the data better.
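Following that closing suggestion, here is a sketch (with illustrative sizes, not taken from the answer) of using a pandas Series as the mapping instead of a dict. Since the keys are simply 0..m_size-1, a Series indexed by those keys carries the same information, Series.map accepts it directly, and it serialises as a compact array; the same object works with the dask series' map as well:

```python
import numpy as np
import pandas as pd

m_size = 50000
np.random.seed(1)

# The mapping as a Series: the default RangeIndex 0..m_size-1 plays
# the role of the dict keys, the values are the mapped floats.
mapping = pd.Series(np.random.random(size=m_size))

# Input values drawn from a ~10% larger range, as in the question's setup.
s = pd.Series(np.random.randint(int(1.1 * m_size), size=200000))
mapped = s.map(mapping)

# Keys >= m_size (the "missing" ~10%) come out as NaN, just as with a dict.
print(mapped.isna().mean())
```

With dd.from_pandas the call would be ds.map(mapping) on the dask series, and the payload shipped to workers stays a fraction of the pickled dict's size.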