This is the code I use to parallelize an apply function over the rows of a pandas.DataFrame object:
import numpy as np
import pandas as pd
from pandas import DataFrame
from multiprocessing import cpu_count, Pool
from functools import partial

def parallel_applymap_df(df: DataFrame, func, num_cores=cpu_count(), **kargs):
    # Split the row range into num_cores contiguous partitions
    partitions = np.linspace(0, len(df), num_cores + 1, dtype=np.int64)
    df_split = [df.iloc[partitions[i]:partitions[i + 1]] for i in range(num_cores)]
    # apply_wrapper (defined elsewhere, not shown here) applies func to each chunk
    pool = Pool(num_cores)
    series = pd.concat(pool.map(partial(apply_wrapper, func=func, **kargs), df_split))
    pool.close()
    pool.join()
    return series
It works on a subsample of 200 000 rows, but when I try the full 200 000 examples I get the following error message:
~/anaconda3/lib/python3.6/site-packages/multiprocess/connection.py in _send_bytes(self, buf)
394 n = len(buf)
395 # For wire compatibility with 3.2 and lower
--> 396 header = struct.pack("!i", n)
397 if n > 16384:
398 # The payload is large so Nagle's algorithm won't be triggered
error: 'i' format requires -2147483648 <= number <= 2147483647
The error is raised by this line:
series = pd.concat(pool.map(partial(apply_wrapper, func=func, **kargs), df_split))
This is strange, because a slightly different version that I use to parallelize operations that are not vectorized in pandas (such as Series.dt.time) works on the same number of rows. This is the version that works:
def parallel_map_df(df: DataFrame, func, num_cores=cpu_count()):
    partitions = np.linspace(0, len(df), num_cores + 1, dtype=np.int64)
    df_split = [df.iloc[partitions[i]:partitions[i + 1]] for i in range(num_cores)]
    pool = Pool(num_cores)
    df = pd.concat(pool.map(func, df_split))
    pool.close()
    pool.join()
    return df
Answer (score: 0):
The error itself comes from the fact that multiprocessing sets up connections between the different workers in the pool. To send data to or from a worker, that data has to be serialized and sent as bytes. The first step is to build a header for the message that will be sent to the worker; that header contains the length of the buffer as a 4-byte signed integer. If the buffer is longer than what such an integer can represent, the code fails with exactly the error you are seeing.
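For illustration, here is a minimal, self-contained reproduction of that size limit using only the standard library (the value of n is just an arbitrary number past the limit, not taken from your data):

import struct

n = 2**31  # one past the maximum value a 4-byte signed int ("!i") can hold
struct.pack("!i", n)
# struct.error: 'i' format requires -2147483648 <= number <= 2147483647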
We are missing the data and quite a bit of the code needed to reproduce your problem, so I will provide a minimal working example instead:
import numpy
import pandas
import random
from functools import partial
from typing import List
from multiprocessing import cpu_count, Pool


def parallel_applymap_df(
    input_dataframe: pandas.DataFrame, func, num_cores: int = cpu_count(), **kwargs
) -> pandas.DataFrame:
    # Create splits in the dataframe of equal size (one split will be processed by one core)
    partitions = numpy.linspace(
        0, len(input_dataframe), num_cores + 1, dtype=numpy.int64
    )
    splits = [
        input_dataframe.iloc[partitions[i] : partitions[i + 1]]
        for i in range(num_cores)
    ]
    # Just for debugging, add metadata to each split
    for index, split in enumerate(splits):
        split.attrs["split_index"] = index
    # Create a pool of workers
    with Pool(num_cores) as pool:
        # Map the splits in the dataframe to workers in the pool,
        # forwarding any extra keyword arguments to func
        result: List[pandas.DataFrame] = pool.map(partial(func, **kwargs), splits)
    # Combine all results of the workers into a new dataframe
    return pandas.concat(result)
if __name__ == "__main__":
    # Create some test data
    df = pandas.DataFrame([{"A": random.randint(0, 100)} for _ in range(200000000)])

    def worker(df: pandas.DataFrame) -> pandas.DataFrame:
        # Print the length of the dataframe being processed (for debugging)
        print("Working on split #", df.attrs["split_index"], "Length:", len(df))
        # Do some arbitrary stuff to the split of the dataframe
        df["B"] = df.apply(lambda row: f"test_{row['A']}", axis=1)
        # Return the result
        return df

    # Create a new dataframe by applying the worker function to the dataframe in parallel
    df = parallel_applymap_df(df, worker)
    print(df)
Note that this is probably not the fastest way to do this. For faster alternatives, have a look at swifter or dask.
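For comparison, here is a rough sketch of the same per-element transformation written with dask instead of a hand-rolled pool; the partition count, the meta hint and the scheduler choice are illustrative assumptions, not part of the original answer:

import random
import pandas
import dask.dataframe as dd
from multiprocessing import cpu_count

# Small stand-in for the dataframe from the example above
df = pandas.DataFrame([{"A": random.randint(0, 100)} for _ in range(1000)])

# One partition per core (an arbitrary choice here)
ddf = dd.from_pandas(df, npartitions=cpu_count())

# The same element-wise transformation as in worker();
# meta tells dask the name and dtype of the resulting column
ddf["B"] = ddf["A"].apply(lambda a: f"test_{a}", meta=("B", "object"))

# Materialize the result; scheduler="processes" mirrors the multiprocessing approach
result = ddf.compute(scheduler="processes")
print(result)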