For my dataset I would like to add some new columns. Each of these columns holds a ratio computed from two other columns. Here is an example of what I mean:
import pandas as pd
col1=[0,0,0,0,2,4,6,0,0,0,100,200,300,400]
col2=[0,0,0,0,4,6,8,0,0,0,200,900,400, 500]
d = {'Unit': [1, 1, 1, 1, 2, 2, 2, 3, 4, 5, 6, 6, 6, 6],
'Year': [2014, 2015, 2016, 2017, 2015, 2016, 2017, 2017, 2014, 2015, 2014, 2015, 2016, 2017], 'col1' : col1, 'col2' : col2 }
df = pd.DataFrame(data=d)
new_df = df.groupby(['Unit', 'Year']).sum()
new_df['col1/col2'] = (new_df.groupby(level=0, group_keys=False)
.apply(lambda x: x.col1/x.col2.shift())
)
col1 col2 col1/col2
Unit Year
1 2014 0 0 NaN
2015 0 0 NaN
2016 0 0 NaN
2017 0 0 NaN
2 2015 2 4 NaN
2016 4 6 1.000000
2017 6 8 1.000000
3 2017 0 0 NaN
4 2014 0 0 NaN
5 2015 0 0 NaN
6 2014 100 200 NaN
2015 200 900 1.000000
2016 300 400 0.333333
2017 400 500 1.000000
However, this is a heavily simplified df. In reality I have columns col1 through col50. What I am doing right now feels very inefficient:
col1=[0,0,0,0,2,4,6,0,0,0,100,200,300,400]
col2=[0,0,0,0,4,6,8,0,0,0,200,900,400, 500]
col3=[0,0,0,0,4,6,8,0,0,0,200,900,400, 500]
col4=[0,0,0,0,4,6,8,0,0,0,200,900,400, 500]
col5=[0,0,0,0,4,6,8,0,0,0,200,900,400, 500]
col6=[0,0,0,0,4,6,8,0,0,0,200,900,400, 500]
# data in all cols is the same, just for example.
d = {'Unit': [1, 1, 1, 1, 2, 2, 2, 3, 4, 5, 6, 6, 6, 6],
'Year': [2014, 2015, 2016, 2017, 2015, 2016, 2017, 2017, 2014, 2015, 2014, 2015, 2016, 2017], 'col1' : col1, 'col2' : col2, 'col3' : col3, 'col4' : col4, 'col5' : col5, 'col6' : col6}
df = pd.DataFrame(data=d)
new_df = df.groupby(['Unit', 'Year']).sum()
new_df['col1/col2'] = (new_df.groupby(level=0, group_keys=False)
.apply(lambda x: x.col1/x.col2.shift())
)
new_df['col3/col4'] = (new_df.groupby(level=0, group_keys=False)
.apply(lambda x: x.col3/x.col4.shift())
)
new_df['col5/col6'] = (new_df.groupby(level=0, group_keys=False)
.apply(lambda x: x.col5/x.col6.shift())
)
I repeat this column-creation step 25 times. Is there a more efficient way to do this?
Thanks in advance,
Jen
Answer 0 (score: 1)
Use DataFrameGroupBy.shift on all the columns in the list cols2, then divide the DataFrame filtered to the columns in cols1 by those shifted values:
col1=[0,0,0,0,2,4,6,0,0,0,100,200,300,400]
col2=[0,0,0,0,4,6,8,0,0,0,200,900,400, 500]
d = {'Unit': [1, 1, 1, 1, 2, 2, 2, 3, 4, 5, 6, 6, 6, 6],
'Year': [2014, 2015, 2016, 2017, 2015, 2016, 2017, 2017, 2014, 2015, 2014, 2015, 2016, 2017],
'col1' : col1, 'col2' : col2 ,
'col3' : col1, 'col4' : col2 ,
'col5' : col1, 'col6' : col2 }
df = pd.DataFrame(data=d)
new_df = df.groupby(['Unit', 'Year']).sum()
cols1 = ['col1','col3','col5']   # numerator columns
cols2 = ['col2','col4','col6']   # denominator columns
# .values drops the column labels, so the division aligns by position rather than by name
new_df = new_df[cols1] / new_df.groupby(level=0)[cols2].shift().values
new_df.columns = [f'{a}/{b}' for a, b in zip(cols1, cols2)]
print (new_df)
col1/col2 col3/col4 col5/col6
Unit Year
1 2014 NaN NaN NaN
2015 NaN NaN NaN
2016 NaN NaN NaN
2017 NaN NaN NaN
2 2015 NaN NaN NaN
2016 1.000000 1.000000 1.000000
2017 1.000000 1.000000 1.000000
3 2017 NaN NaN NaN
4 2014 NaN NaN NaN
5 2015 NaN NaN NaN
6 2014 NaN NaN NaN
2015 1.000000 1.000000 1.000000
2016 0.333333 0.333333 0.333333
2017 1.000000 1.000000 1.000000
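If the real data really does have col1 through col50, with each odd-numbered column divided by the even-numbered column that follows it (an assumption made here purely for illustration), the two lists can also be built programmatically instead of being typed out 25 times:
# A sketch only: assumes the columns are literally named col1..col50 and that
# every odd column is divided by the shifted value of the even column after it.
cols1 = [f'col{i}' for i in range(1, 50, 2)]   # col1, col3, ..., col49
cols2 = [f'col{i}' for i in range(2, 51, 2)]   # col2, col4, ..., col50
new_df = new_df[cols1] / new_df.groupby(level=0)[cols2].shift().values
new_df.columns = [f'{a}/{b}' for a, b in zip(cols1, cols2)]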
Answer 1 (score: 0)
Have you considered using NumPy? pandas is actually built on top of NumPy, which is why it is so fast. DataFrames are great, but for deeper or more complex operations I simply convert to NumPy, work with the arrays, and then convert back to pandas:
...
new_df = df.groupby(['Unit', 'Year']).sum()
new_array = new_df.values
print(type(new_array))
[out]: <type 'numpy.ndarray'>
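For completeness, a minimal sketch of the last step mentioned above, converting the array back into a DataFrame; back_to_df is just a name chosen here for illustration:
# Reusing the original index and column labels so the (Unit, Year)
# MultiIndex is not lost when converting back from the NumPy array.
back_to_df = pd.DataFrame(new_array, index=new_df.index, columns=new_df.columns)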
Good luck!