I have a pandas DataFrame, df_test. It contains a column 'size' that represents size in bytes. I have computed KB, MB, and GB with the following code:
import locale
import pandas as pd

locale.setlocale(locale.LC_ALL, '')  # enable locale-aware thousands grouping for locale.format

df_test = pd.DataFrame([
{'dir': '/Users/uname1', 'size': 994933},
{'dir': '/Users/uname2', 'size': 109338711},
])
df_test['size_kb'] = df_test['size'].astype(int).apply(lambda x: locale.format("%.1f", x / 1024.0, grouping=True) + ' KB')
df_test['size_mb'] = df_test['size'].astype(int).apply(lambda x: locale.format("%.1f", x / 1024.0 ** 2, grouping=True) + ' MB')
df_test['size_gb'] = df_test['size'].astype(int).apply(lambda x: locale.format("%.1f", x / 1024.0 ** 3, grouping=True) + ' GB')
df_test
             dir       size       size_kb   size_mb  size_gb
0  /Users/uname1     994933      971.6 KB    0.9 MB   0.0 GB
1  /Users/uname2  109338711  106,776.1 KB  104.3 MB   0.1 GB

[2 rows x 5 columns]
I have run this over 120,000+ rows and it takes about 2.97 seconds per column * 3 = ~9 seconds according to %timeit.
Is there any way to speed this up? For example, instead of returning one column at a time from apply and running it three times, can I return all three columns in one pass to insert back into the original DataFrame?
The other questions I have found all want to take multiple values and return a single value. I want to take a single value and return multiple columns.
Answer 0 (score: 51)
This is an old question, but for completeness: you can return a Series containing the new data from the applied function, which removes the need to iterate three times. Passing axis=1 to apply applies the sizes function to each row of the DataFrame, returning a Series to add to a new DataFrame. This Series contains the new values as well as the original data.
def sizes(s):
    s['size_kb'] = locale.format("%.1f", s['size'] / 1024.0, grouping=True) + ' KB'
    s['size_mb'] = locale.format("%.1f", s['size'] / 1024.0 ** 2, grouping=True) + ' MB'
    s['size_gb'] = locale.format("%.1f", s['size'] / 1024.0 ** 3, grouping=True) + ' GB'
    return s
df_test = df_test.append(rows_list)  # rows_list: the raw records used to build df_test in the original question
df_test = df_test.apply(sizes, axis=1)
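For the two-row df_test from the question, this yields the following (a sketch of the expected output; values taken from the question's own table, assuming a locale with comma grouping is set):

print(df_test)
#              dir       size       size_kb   size_mb  size_gb
# 0  /Users/uname1     994933      971.6 KB    0.9 MB   0.0 GB
# 1  /Users/uname2  109338711  106,776.1 KB  104.3 MB   0.1 GB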
Answer 1 (score: 20)
Using apply and zip will be about 3x faster than the Series approach.
def sizes(s):
    return locale.format("%.1f", s / 1024.0, grouping=True) + ' KB', \
           locale.format("%.1f", s / 1024.0 ** 2, grouping=True) + ' MB', \
           locale.format("%.1f", s / 1024.0 ** 3, grouping=True) + ' GB'
df_test['size_kb'], df_test['size_mb'], df_test['size_gb'] = zip(*df_test['size'].apply(sizes))
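Why the zip(* ...) assignment works (a side note, not from the original answer): df_test['size'].apply(sizes) produces a Series of 3-tuples, and zip(*...) transposes those row tuples into three column-wise tuples. A minimal sketch using the values from the question:

pairs = [('971.6 KB', '0.9 MB', '0.0 GB'), ('106,776.1 KB', '104.3 MB', '0.1 GB')]
kb, mb, gb = zip(*pairs)
# kb == ('971.6 KB', '106,776.1 KB'), mb == ('0.9 MB', '104.3 MB'), gb == ('0.0 GB', '0.1 GB')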
The test results are:
Separate df.apply():
100 loops, best of 3: 1.43 ms per loop
Return Series:
100 loops, best of 3: 2.61 ms per loop
Return tuple:
1000 loops, best of 3: 819 µs per loop
Answer 2 (score: 13)
Some of the current replies work fine, but I would like to offer another, maybe more 'pandifyed', option. This works for me with the current pandas 0.23 (not sure whether it works in previous versions):
import locale
import pandas as pd
df_test = pd.DataFrame([
{'dir': '/Users/uname1', 'size': 994933},
{'dir': '/Users/uname2', 'size': 109338711},
])
def sizes(s):
    a = locale.format("%.1f", s['size'] / 1024.0, grouping=True) + ' KB'
    b = locale.format("%.1f", s['size'] / 1024.0 ** 2, grouping=True) + ' MB'
    c = locale.format("%.1f", s['size'] / 1024.0 ** 3, grouping=True) + ' GB'
    return a, b, c
df_test[['size_kb', 'size_mb', 'size_gb']] = df_test.apply(sizes, axis=1, result_type="expand")
Note that the trick is in the result_type="expand" parameter of apply, which expands its result so it can be directly assigned to the new/old columns.
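A minimal standalone illustration of the mechanism (toy data of my own, not from the answer): without result_type="expand", apply(axis=1) returns a single column of tuples; with it, each tuple element becomes its own column.

toy = pd.DataFrame({'x': [1, 2]})
toy[['double', 'square']] = toy.apply(lambda row: (row['x'] * 2, row['x'] ** 2), axis=1, result_type="expand")
#    x  double  square
# 0  1       2       1
# 1  2       4       4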
Answer 3 (score: 10)
The performance of the top answers varies considerably, and Jesse and famaral42 have already discussed this, but it is worth making a fair comparison between the top answers and elaborating on a subtle but important detail of Jesse's answer: the argument passed in to the function also affects performance.
(Python 3.7.4, Pandas 1.0.3)
import pandas as pd
import locale
import timeit
def create_new_df_test():
    df_test = pd.DataFrame([
        {'dir': '/Users/uname1', 'size': 994933},
        {'dir': '/Users/uname2', 'size': 109338711},
    ])
    return df_test

def sizes_pass_series_return_series(series):
    series['size_kb'] = locale.format_string("%.1f", series['size'] / 1024.0, grouping=True) + ' KB'
    series['size_mb'] = locale.format_string("%.1f", series['size'] / 1024.0 ** 2, grouping=True) + ' MB'
    series['size_gb'] = locale.format_string("%.1f", series['size'] / 1024.0 ** 3, grouping=True) + ' GB'
    return series

def sizes_pass_series_return_tuple(series):
    a = locale.format_string("%.1f", series['size'] / 1024.0, grouping=True) + ' KB'
    b = locale.format_string("%.1f", series['size'] / 1024.0 ** 2, grouping=True) + ' MB'
    c = locale.format_string("%.1f", series['size'] / 1024.0 ** 3, grouping=True) + ' GB'
    return a, b, c

def sizes_pass_value_return_tuple(value):
    a = locale.format_string("%.1f", value / 1024.0, grouping=True) + ' KB'
    b = locale.format_string("%.1f", value / 1024.0 ** 2, grouping=True) + ' MB'
    c = locale.format_string("%.1f", value / 1024.0 ** 3, grouping=True) + ' GB'
    return a, b, c
Here are the results:
# 1 - Accepted (Nels11 Answer) - (pass series, return series):
9.82 ms ± 377 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# 2 - Pandafied (jaumebonet Answer) - (pass series, return tuple):
2.34 ms ± 48.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# 3 - Tuples (pass series, return tuple then zip):
1.36 ms ± 62.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
# 4 - Tuples (Jesse Answer) - (pass value, return tuple then zip):
752 µs ± 18.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Notice that returning tuples is the fastest method, but what is passed in as an argument also affects performance. The difference in the code is subtle, but the performance improvement is significant.
Test #4 (passing in a single value) is twice as fast as test #3 (passing in a Series), even though the operation performed is ostensibly identical.
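The difference comes down to the call shape (my reading of the benchmark, not spelled out in the original answers): Series.apply hands the function a plain scalar, while DataFrame.apply(..., axis=1) has to construct a Series object for every row before calling the function.

# one scalar per call (test #4)
df_test['size'].apply(sizes_pass_value_return_tuple)
# a per-row Series constructed for each call (test #3)
df_test.apply(sizes_pass_series_return_tuple, axis=1)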
But there is more...
# 1a - Accepted (Nels11 Answer) - (pass series, return series, new columns exist):
3.23 ms ± 141 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# 2a - Pandafied (jaumebonet Answer) - (pass series, return tuple, new columns exist):
2.31 ms ± 39.3 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# 3a - Tuples (pass series, return tuple then zip, new columns exist):
1.36 ms ± 58.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
# 4a - Tuples (Jesse Answer) - (pass value, return tuple then zip, new columns exist):
694 µs ± 3.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In some cases (#1a and #4a), applying the function to a DataFrame in which the output columns already exist is faster than having them created from the function.
Here is the code used to run the tests:
# Paste and run the following in ipython console. It will not work if you run it from a .py file.
print('\nAccepted Answer (pass series, return series, new columns dont exist):')
df_test = create_new_df_test()
%timeit result = df_test.apply(sizes_pass_series_return_series, axis=1)
print('Accepted Answer (pass series, return series, new columns exist):')
df_test = create_new_df_test()
df_test = pd.concat([df_test, pd.DataFrame(columns=['size_kb', 'size_mb', 'size_gb'])])
%timeit result = df_test.apply(sizes_pass_series_return_series, axis=1)
print('\nPandafied (pass series, return tuple, new columns dont exist):')
df_test = create_new_df_test()
%timeit df_test[['size_kb', 'size_mb', 'size_gb']] = df_test.apply(sizes_pass_series_return_tuple, axis=1, result_type="expand")
print('Pandafied (pass series, return tuple, new columns exist):')
df_test = create_new_df_test()
df_test = pd.concat([df_test, pd.DataFrame(columns=['size_kb', 'size_mb', 'size_gb'])])
%timeit df_test[['size_kb', 'size_mb', 'size_gb']] = df_test.apply(sizes_pass_series_return_tuple, axis=1, result_type="expand")
print('\nTuples (pass series, return tuple then zip, new columns dont exist):')
df_test = create_new_df_test()
%timeit df_test['size_kb'], df_test['size_mb'], df_test['size_gb'] = zip(*df_test.apply(sizes_pass_series_return_tuple, axis=1))
print('Tuples (pass series, return tuple then zip, new columns exist):')
df_test = create_new_df_test()
df_test = pd.concat([df_test, pd.DataFrame(columns=['size_kb', 'size_mb', 'size_gb'])])
%timeit df_test['size_kb'], df_test['size_mb'], df_test['size_gb'] = zip(*df_test.apply(sizes_pass_series_return_tuple, axis=1))
print('\nTuples (pass value, return tuple then zip, new columns dont exist):')
df_test = create_new_df_test()
%timeit df_test['size_kb'], df_test['size_mb'], df_test['size_gb'] = zip(*df_test['size'].apply(sizes_pass_value_return_tuple))
print('Tuples (pass value, return tuple then zip, new columns exist):')
df_test = create_new_df_test()
df_test = pd.concat([df_test, pd.DataFrame(columns=['size_kb', 'size_mb', 'size_gb'])])
%timeit df_test['size_kb'], df_test['size_mb'], df_test['size_gb'] = zip(*df_test['size'].apply(sizes_pass_value_return_tuple))
Answer 4 (score: 3)
I believe version 1.1 breaks the behaviour suggested in the top answer here.
import pandas as pd
def test_func(row):
    row['c'] = str(row['a']) + str(row['b'])
    row['d'] = row['a'] + 1
    return row
df = pd.DataFrame({'a': [1, 2, 3], 'b': ['i', 'j', 'k']})
df.apply(test_func, axis=1)
The above code returns this on pandas 1.1.0:
   a  b   c  d
0  1  i  1i  2
1  1  i  1i  2
2  1  i  1i  2
While in pandas 1.0.5 it returned:
   a  b   c  d
0  1  i  1i  2
1  2  j  2j  3
2  3  k  3k  4
Which, I suppose, is what you would expect.
I am not sure how the release notes explain this behaviour, but as explained here, avoiding mutation of the original rows by copying them restores the old behaviour, i.e.:
def test_func(row):
    row = row.copy()  # <---- Avoid mutating the original reference
    row['c'] = str(row['a']) + str(row['b'])
    row['d'] = row['a'] + 1
    return row
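With the copy in place, the same call should again produce the 1.0.5-style result shown above on newer pandas versions (a quick sketch of the check):

print(df.apply(test_func, axis=1))
#    a  b   c  d
# 0  1  i  1i  2
# 1  2  j  2j  3
# 2  3  k  3k  4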
Answer 5 (score: 2)
Really cool answers! Thanks Jesse and jaumebonet! Just some observations regarding:
zip(* ...
... result_type="expand")
Although expand is somewhat more elegant (pandifyed), zip is at least 2x faster. In the simple example below I got 4x faster.
import pandas as pd
dat = [ [i, 10*i] for i in range(1000)]
df = pd.DataFrame(dat, columns = ["a","b"])
def add_and_sub(row):
    add = row["a"] + row["b"]
    sub = row["a"] - row["b"]
    return add, sub
df[["add", "sub"]] = df.apply(add_and_sub, axis=1, result_type="expand")
# versus
df["add"], df["sub"] = zip(*df.apply(add_and_sub, axis=1))
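To reproduce the comparison, both variants can be timed directly in an IPython console, mirroring the %timeit pattern used in the earlier benchmark (a sketch; absolute numbers will vary by machine):

%timeit df[["add", "sub"]] = df.apply(add_and_sub, axis=1, result_type="expand")
%timeit df["add"], df["sub"] = zip(*df.apply(add_and_sub, axis=1))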
Answer 6 (score: 2)
A fairly fast way to achieve this with apply and lambda: just return the multiple values as a list and then use to_list().
import time

import pandas as pd
dat = [ [i, 10*i] for i in range(100000)]
df = pd.DataFrame(dat, columns = ["a","b"])
def add_and_div(x):
    add = x + 3
    div = x / 3
    return [add, div]
start = time.time()
df[['c','d']] = df['a'].apply(lambda x: add_and_div(x)).to_list()
end = time.time()
print(end-start) # output: 0.27606
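A minimal look at the intermediate objects (my own sketch, tiny assumed data): apply returns a Series of two-element lists, and to_list() turns that into a list of row lists, which pandas then spreads across the two target columns.

s = pd.Series([3, 6]).apply(add_and_div)  # Series whose values are [add, div] lists
s.to_list()                               # [[6, 1.0], [9, 2.0]]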
Answer 7 (score: 1)
In general, to return multiple values, this is what I do:
import numpy as np

def gimmeMultiple(group):
    x1 = 1
    x2 = 2
    return np.array([[x1, x2]])

def gimmeMultipleDf(group):
    x1 = 1
    x2 = 2
    return pd.DataFrame(np.array([[x1, x2]]), columns=['x1', 'x2'])
df['size'].astype(int).apply(gimmeMultiple)
df['size'].astype(int).apply(gimmeMultipleDf)
Returning a DataFrame in the end has its perks, but sometimes it is not required. You can look at what apply() returns and play around with the functions a bit ;)
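One way (my own addition, assuming df is the question's DataFrame) to actually collect the per-row frames from the second variant into a single DataFrame:

parts = df['size'].astype(int).apply(gimmeMultipleDf)  # a Series whose values are one-row DataFrames
combined = pd.concat(list(parts), ignore_index=True)   # single DataFrame with columns x1, x2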
Answer 8 (score: 1)
Just another readable way. This code adds three new columns and their values, returning a Series without the use of extra arguments in the apply function.
def sizes(s):
    val_kb = locale.format("%.1f", s['size'] / 1024.0, grouping=True) + ' KB'
    val_mb = locale.format("%.1f", s['size'] / 1024.0 ** 2, grouping=True) + ' MB'
    val_gb = locale.format("%.1f", s['size'] / 1024.0 ** 3, grouping=True) + ' GB'
    return pd.Series([val_kb, val_mb, val_gb], index=['size_kb', 'size_mb', 'size_gb'])
df[['size_kb', 'size_mb', 'size_gb']] = df.apply(lambda x: sizes(x), axis=1)
A general example from https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html:
df.apply(lambda x: pd.Series([1, 2], index=['foo', 'bar']), axis=1)
#    foo  bar
# 0    1    2
# 1    1    2
# 2    1    2
Answer 9 (score: 0)
This gives a new DataFrame containing two columns from the original one.
import pandas as pd
df = ...
df_with_two_columns = df.apply(lambda row: pd.Series([row['column_1'], row['column_2']], index=['column_1', 'column_2']), axis=1)
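For illustration only (my own note, assuming column_1 and column_2 exist in df): the end result is essentially the same as selecting and copying the two columns directly.

df_with_two_columns = df[['column_1', 'column_2']].copy()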