How can distribution be made to run fast?

Date: 2019-02-28 11:39:07

Tags: python pandas numpy dask dask-distributed

I have a dataframe:

import numpy as np
import pandas as pd
import dask.dataframe as dd
a = {'b':['cat','bat','cat','cat','bat','No Data','bat','No Data'],
     'c':['str1','str2','str3', 'str4','str5','str6','str7', 'str8']
    }
df11 = pd.DataFrame(a,index=['x1','x2','x3','x4','x5','x6','x7','x8'])

I tried using a lambda function to process each element row by row on the normal dataframe, as shown below:

def elementsearch(term1, term2):
    print(term1, term2 )
    return term1

df11.apply(lambda x: elementsearch(x.b,x.c), axis =1)

This works fine. But when I use the dask library:

ddf = dd.from_pandas(df11,npartitions=8)
ddf.map_partitions(lambda df : df.apply(lambda x : elementsearch((x.b,x.c),axis=1)))

it throws the following error:

ValueError: Metadata inference failed in `lambda`.

You have supplied a custom function and Dask is unable to 
determine the type of output that that function returns. 

To resolve this please provide a meta= keyword.
The docstring of the Dask function you ran should have more information.

Original error is below:
------------------------
AttributeError("'Series' object has no attribute 'c'", 'occurred at index b')

Traceback:
---------
  File "/opt/conda/lib/python3.6/site-packages/dask/dataframe/utils.py", line 137, in raise_on_meta_error
    yield
  File "/opt/conda/lib/python3.6/site-packages/dask/dataframe/core.py", line 3477, in _emulate
    return func(*_extract_meta(args, True), **_extract_meta(kwargs, True))
  File "<ipython-input-198-8857a48ba1e5>", line 2, in <lambda>
    ddf.map_partitions(lambda df : df.apply(lambda x : elementsearch((x.b,x.c),axis=1)))
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/frame.py", line 6014, in apply
    return op.get_result()
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/apply.py", line 318, in get_result
    return super(FrameRowApply, self).get_result()
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/apply.py", line 142, in get_result
    return self.apply_standard()
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/apply.py", line 248, in apply_standard
    self.apply_series_generator()
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/apply.py", line 277, in apply_series_generator
    results[i] = self.f(v)
  File "<ipython-input-198-8857a48ba1e5>", line 2, in <lambda>
    ddf.map_partitions(lambda df : df.apply(lambda x : elementsearch((x.b,x.c),axis=1)))
  File "/opt/conda/lib/python3.6/site-packages/pandas/core/generic.py", line 4376, in __getattr__
    return object.__getattribute__(self, name)

I have already looked at this question on Stack Overflow, but it did not work for me: On Dask DataFrame.apply(), receiving n rows of value 1 before actual rows processed

How can I fix this?

1 answer:

Answer 0 (score: 0)

I suggest using the apply method directly on the dask dataframe, just as in your Pandas code, supplying the meta= keyword so Dask does not need to infer the output type:

ddf.apply(lambda x: elementsearch(x.b, x.c), axis=1, meta=(None, 'object'))