Why doesn't dask execute in parallel?

Time: 2019-06-10 09:10:03

Tags: python dask dask-distributed dask-delayed

Can someone point out what I'm doing wrong in the dask implementation below? It doesn't seem to be using multiple cores.

[Updated with reproducible code]

Code using dask:

import time

import dask
import numpy as np
import pandas as pd

bookingID = np.arange(1, 10000)
book_data = pd.DataFrame(np.random.rand(1000))
def calculate_feature_stats(bookingID):
    curr_book_data = book_data
    row = list()
    row.append(bookingID)
    row.append(curr_book_data.min())
    row.append(curr_book_data.max())
    row.append(curr_book_data.std())
    row.append(curr_book_data.mean())

    return row


calculate_feature_stats = dask.delayed(calculate_feature_stats)


rows = []


for bookid in bookingID.tolist():
    row = calculate_feature_stats(bookid)
    rows.append(row)

start = time.time()
rows = dask.persist(*rows)
end = time.time()
print(end - start)  # Execution time = 16s in my machine

Code with the normal implementation (without dask):

bookingID = np.arange(1,10000)
book_data = pd.DataFrame(np.random.rand(1000))

def calculate_feature_stats_normal(bookingID):
    curr_book_data = book_data
    row = list()
    row.append(bookingID)
    row.append(curr_book_data.min())
    row.append(curr_book_data.max())
    row.append(curr_book_data.std())
    row.append(curr_book_data.mean())
    return row


rows = []
start = time.time()
for bookid in bookingID.tolist():
    row = calculate_feature_stats_normal(bookid)
    rows.append(row)
end = time.time()
print(end - start)  # Execution time = 4s in my machine

So how is it possible that dask gives no speedup at all?

1 answer:

Answer 0 (score: 1)

Answer

Expanding on the comment: you should take into account that dask has an overhead of roughly 1 ms per task (see the doc), so if each computation is shorter than that, dask is not worth the trouble.
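As a rough illustration of that overhead (a minimal sketch of my own, not from the question; timings will vary by machine), scheduling thousands of near-instant tasks costs far more than doing the work directly:

import time

import dask


@dask.delayed
def trivial(x):
    # far less than 1 ms of real work, so the ~1 ms/task scheduling
    # overhead dominates the total runtime
    return x + 1


start = time.time()
dask.compute(*[trivial(i) for i in range(10_000)])
print(f"10,000 tiny delayed tasks: {time.time() - start:.2f} s")

start = time.time()
[x + 1 for x in range(10_000)]
print(f"plain Python loop: {time.time() - start:.6f} s")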

Moving on to your specific problem, I can think of two real-world scenarios:

1. one big dataframe with a column called bookingID and another called value
2. a separate file for every bookingID

In the second case you could start from this answer (a minimal sketch of that per-file pattern is added after the final thoughts below), while in the first case you can proceed as follows:

import dask.dataframe as dd
import numpy as np
import pandas as pd



# create dummy df
df = []
for i in range(10_000):
    df.append(pd.DataFrame({"id":i,
                            "value":np.random.rand(1000)}))
df = pd.concat(df, ignore_index=True)
df = df.sample(frac=1).reset_index(drop=True)
df.to_parquet("df.parq")

Pandas

%%time
df = pd.read_parquet("df.parq")
out = df.groupby("id").agg({"value":{"min", "max", "std", "mean"}})
out.columns = [col[1] for col in out.columns]
out = out.reset_index(drop=True)

CPU times: user 1.65 s, sys: 316 ms, total: 1.96 s
Wall time: 1.08 s

Dask

%%time
df = dd.read_parquet("df.parq")
out = df.groupby("id").agg({"value":["min", "max", "std", "mean"]}).compute()
out.columns = [col[1] for col in out.columns]
out = out.reset_index(drop=True)

CPU times: user 4.94 s, sys: 427 ms, total: 5.36 s
Wall time: 3.94 s
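As a side note (this is my own suggestion, not something tested above), if you want to verify whether multiple cores are actually being used, you can run the same groupby on a dask.distributed Client with process-based workers and watch CPU usage on the dashboard:

import dask.dataframe as dd
from dask.distributed import Client

# process-based workers sidestep the GIL; the dashboard link printed by
# the client shows per-worker CPU usage while the computation runs
client = Client(processes=True)
print(client.dashboard_link)

df = dd.read_parquet("df.parq")
out = df.groupby("id").agg({"value": ["min", "max", "std", "mean"]}).compute()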

Final thoughts

In this scenario, dask makes sense only if the df doesn't fit in memory.
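For the second scenario (a separate file per bookingID), a minimal sketch of the per-file pattern could look like the following; the directory layout, file names, and summary columns are assumptions made for illustration:

from glob import glob

import dask
import pandas as pd


@dask.delayed
def stats_for_file(path):
    # each task reads and summarises one booking's file on its own,
    # so the per-task work is large enough to amortise dask's overhead
    df = pd.read_parquet(path)
    return {
        "bookingID": df["id"].iloc[0],
        "min": df["value"].min(),
        "max": df["value"].max(),
        "std": df["value"].std(),
        "mean": df["value"].mean(),
    }


# hypothetical layout: one parquet file per booking, e.g. bookings/123.parq
tasks = [stats_for_file(path) for path in glob("bookings/*.parq")]
result = pd.DataFrame(dask.compute(*tasks))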