Speeding up or vectorizing a pandas apply with conditional function application

Date: 2018-04-10 06:41:46

Tags: python pandas scipy python-multiprocessing data-science

I want to apply a function row by row to a dataframe that looks like this:

name  value
'foo' 2
'bar' 4
'bar' 3
'foo' 1
  .   .
  .   .
  .   .
'bar' 8

Speed matters a great deal to me, because I am running this over several 90 GB datasets, so I have been trying to vectorize the following operation for use with df.apply:

Conditioned on 'name', I want to plug 'value' into a separate function, perform some arithmetic on the result, and write it to a new column 'output'. Something like:

funcs = {'foo': <FunctionObject>, 'bar': <FunctionObject>}

def masterFunc(row):
    correctFunction = funcs[row['name']]
    row['output'] = correctFunction(row['value']) + 3*row['value']
    return row

df = df.apply(masterFunc, axis=1)

In my real problem, I have 32 different functions that may be applied to 'value' depending on 'name'. Each individual function (fooFunc, barFunc, zooFunc, etc.) is already vectorized; they are scipy interp1d functions built like this:

separateFunc = scipy.interpolate.interp1d([2, 3, 4], [3, 5, 7])
# separateFunc now behaves like the line y = 2x - 1 on [2, 4]. Use case:
y = separateFunc(3.5)  # y == 6.0

However, I am not sure how to vectorize masterFunc itself. It seems that choosing which function to "pull out" and apply to 'value' is quite expensive, since it requires a memory access on every iteration (with my current approach of storing the functions in hash tables). Yet the alternative seems to be a pile of if-then statements, which also do not look vectorizable. How can I speed this up?
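For reference, the if-then dispatch does have a branch-free counterpart in numpy's np.select. It only pays off when the functions are cheap, because every function is evaluated over the full array before the selection happens. A toy sketch, with two lambdas standing in for the real vectorized per-name functions:

```python
import numpy as np
import pandas as pd

# Toy stand-ins for the vectorized per-name functions.
funcs = {'foo': lambda v: v + 1.0, 'bar': lambda v: v * 2.0}

df = pd.DataFrame({'name': ['foo', 'bar', 'bar', 'foo'],
                   'value': [2.0, 4.0, 3.0, 1.0]})

values = df['value'].to_numpy()
names = df['name'].to_numpy()
conditions = [names == name for name in funcs]
# Every function runs over the full array; np.select then keeps only the
# entries whose matching condition is True.
choices = [f(values) + 3 * values for f in funcs.values()]
df['output'] = np.select(conditions, choices)
```

With 32 functions this evaluates 32 full-length arrays per batch, so the per-group masking shown in the answer below is usually the better fit; np.select is mainly attractive when the branch count is small.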

Actual code, with repetitive parts removed for brevity:

interpolationFunctions = {}
# interpolate.emissionsFunctions is a separate function that does some scipy stuff
interpolationFunctions[2] = interpolate.emissionsFunctions('./roadtype_2_curve.csv')
interpolationFunctions[3] = interpolate.emissionsFunctions('./roadtype_3_curve.csv')

def compute_pollutants(row):
    funcs = interpolationFunctions[row['roadtype']]
    speed = row['speed']
    length = row['length']
    row['CO2-Atm'] = funcs['CO2-Atm'](speed)*length*speed*0.00310686368
    row['CO2-Eq'] = funcs['CO2-Eq'](speed)*length*speed*0.00310686368
    return row

1 answer:

Answer 0: (score: 1)

Try to create a reproducible example that generalizes to your problem. You can run the code with different row counts to compare the results of the different approaches, and it should not be hard to extend one of these approaches with Cython or multiprocessing for further speedups. You mentioned that your data is very large; I have not tested the memory usage of each approach, so it is worth testing on your own machine.

import numpy as np
import pandas as pd
import time as t

# Example Functions
def foo(x):
    return x + x

def bar(x):
    return x * x

# Example Functions for multiple columns
def foo2(x, y):
    return x + y

def bar2(x, y):
    return x * y

# Create function dictionary
funcs = {'foo': foo, 'bar': bar}
funcs2 = {'foo': foo2, 'bar': bar2}

n_rows = 1000000
# Generate Sample Data
names = np.random.choice(list(funcs.keys()), size=n_rows)
values = np.random.normal(100, 20, size=n_rows)
df = pd.DataFrame()
df['name'] = names
df['value'] = values

# Create copy for comparison using different methods
df_copy = df.copy()

# Modified original master function
def masterFunc(row, funcs):
    correctFunction = funcs[row['name']]
    return correctFunction(row['value']) + 3*row['value']

t1 = t.time()
df['output'] = df.apply(lambda x: masterFunc(x, funcs), axis=1)
t2 = t.time()
print("Time for all rows/functions: ", t2 - t1)


# For Functions that Can be vectorized using numpy
t3 = t.time()
output_dataframe_list = []
for func_name, func in funcs.items():
    # .copy() avoids SettingWithCopyWarning when assigning below
    df_subset = df_copy.loc[df_copy['name'] == func_name, :].copy()
    df_subset['output'] = func(df_subset['value'].values) + 3 * df_subset['value'].values
    output_dataframe_list.append(df_subset)

output_df = pd.concat(output_dataframe_list)

t4 = t.time()
print("Time for all rows/functions: ", t4 - t3)
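The per-function masking above can also be expressed with groupby, which partitions the frame once instead of re-scanning the 'name' column for each function. A minimal sketch of the same computation, using the example functions defined earlier:

```python
import numpy as np
import pandas as pd

# Same example functions as above
def foo(x):
    return x + x

def bar(x):
    return x * x

funcs = {'foo': foo, 'bar': bar}

df = pd.DataFrame({'name': ['foo', 'bar', 'bar', 'foo'],
                   'value': [1.0, 2.0, 3.0, 4.0]})

# groupby partitions the rows once; each vectorized function then runs
# over its whole group in a single call.
pieces = []
for name, group in df.groupby('name'):
    group = group.copy()
    vals = group['value'].to_numpy()
    group['output'] = funcs[name](vals) + 3 * vals
    pieces.append(group)

result = pd.concat(pieces).sort_index()  # restore the original row order
```

Whether this beats the explicit masks is worth timing on your own data; the main win is readability and a single pass over the grouping column.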


# Even a plain Python for loop over a numpy array of values is still faster than DataFrame.apply
t5 = t.time()
output_dataframe_list2 = []
for func_name, func in funcs2.items():
    # .copy() avoids SettingWithCopyWarning when assigning below
    df_subset = df_copy.loc[df_copy['name'] == func_name, :].copy()
    col1_values = df_subset['value'].values
    outputs = np.zeros(len(col1_values))
    for i, v in enumerate(col1_values):
        outputs[i] = func(v, v) + 3 * v

    df_subset['output'] = outputs
    output_dataframe_list2.append(df_subset)

output_df2 = pd.concat(output_dataframe_list2)

t6 = t.time()
print("Time for all rows/functions: ", t6 - t5)
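Applied to the compute_pollutants code from the question, the masking approach might look like the sketch below. The lambda curves are hypothetical stand-ins for whatever interpolate.emissionsFunctions returns (it is not shown in the question); the column names and the conversion constant are taken from the original code.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-ins for the per-roadtype interpolation curves; the
# real ones come from interpolate.emissionsFunctions(csv_path).
interpolationFunctions = {
    2: {'CO2-Atm': lambda s: 2.0 * s, 'CO2-Eq': lambda s: 3.0 * s},
    3: {'CO2-Atm': lambda s: 4.0 * s, 'CO2-Eq': lambda s: 5.0 * s},
}

df = pd.DataFrame({'roadtype': [2, 3, 2],
                   'speed': [10.0, 20.0, 30.0],
                   'length': [1.0, 2.0, 3.0]})

K = 0.00310686368  # conversion constant from the original code
for roadtype, curves in interpolationFunctions.items():
    mask = (df['roadtype'] == roadtype).to_numpy()
    speed = df.loc[mask, 'speed'].to_numpy()
    length = df.loc[mask, 'length'].to_numpy()
    for pollutant in ('CO2-Atm', 'CO2-Eq'):
        # One vectorized call per (roadtype, pollutant) pair instead of
        # one Python-level call per row.
        df.loc[mask, pollutant] = curves[pollutant](speed) * length * speed * K
```

With 32 road types this is 32 passes over the roadtype column rather than one function lookup per row, which is the trade the timings above are measuring.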