Complex row and column manipulation with pandas

Date: 2015-06-26 18:47:37

Tags: python file csv pandas time-series

I am trying to perform row and column operations at the same time on time-series data. I have gone through almost all the examples here and in the documentation, but without much luck, and I am now more confused than before.

I have two files, File_1.csv and File_2.csv, in the same path. The last column of File_1.csv is referred to below as col[last].

The expected output comes from combining file_1.csv and file_2.csv with the following operations (the count of rows with Nos=123 in file_2.csv is 3):

  1. Take the value in col[last] of file_1.csv (624 for Nos=123) and divide it by the count of the corresponding Nos in file_2.csv: 624/3 = 208.

  2. Add this new value to the value in the matching row of file_2.csv (its 00:00:00 column) and put the result in a new column of file_2.csv whose header is col[last] of file_1.csv: 208 + 20 = 228.

  3. The appended file_2.csv would then look like:

    Nos,00:00:00,12:00:00
    123,20,228
    123,20,228
    123,20,228
    125,50,82/83 #float to be rounded off
    125,50,82/83
    567,500,2004 #float to be rounded off
    567,500,2004
    567,500,2004
    567,500,2004
    567,500,2004
    

I am not sure where to start; this looks very complicated to me. Any suggestions on how to write the code would be a great help. Thanks in advance.

1 Answer:

Answer 0 (score: 1)

Merge the two DataFrames into one:

In [34]: df3 = pd.merge(df2, df1[['Nos', '12:00:00']], on=['Nos'], how='left')

In [35]: df3
Out[35]: 
   Nos  00:00:00  12:00:00
0  123        20       624
1  123        20       624
2  123        20       624
3  125        50        65
4  125        50        65
5  567       500      7522
6  567       500      7522
7  567       500      7522
8  567       500      7522
9  567       500      7522
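
For a self-contained run of the snippet above, df1 and df2 can also be built directly; the following is only a sketch inferred from the merged output (the real File_1.csv / File_2.csv may contain additional columns):

import pandas as pd

# assumed contents, inferred from the merged output above
df1 = pd.DataFrame({'Nos': [123, 125, 567],
                    '12:00:00': [624, 65, 7522]})
df2 = pd.DataFrame({'Nos': [123, 123, 123, 125, 125, 567, 567, 567, 567, 567],
                    '00:00:00': [20, 20, 20, 50, 50, 500, 500, 500, 500, 500]})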

Then you can do a groupby/transform to count how many items are in each group:

count = df3.groupby(['Nos'])['12:00:00'].transform('count')
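
For the merged frame above, count is a Series aligned row-by-row with df3's index, so it would look something like:

0    3
1    3
2    3
3    2
4    2
5    5
6    5
7    5
8    5
9    5
Name: 12:00:00, dtype: int64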

The value you want to compute can then be expressed as

df3['12:00:00'] = df3['00:00:00'] + df3['12:00:00']/count 

For example,

import pandas as pd
df1 = pd.read_csv('File_1.csv')
df2 = pd.read_csv('File_2.csv')

# last column name of each file, e.g. '12:00:00' and '00:00:00'
last1, last2 = df1.columns[-1], df2.columns[-1]
# broadcast df1's last column onto every matching Nos row of df2
df3 = pd.merge(df2, df1[['Nos', last1]], on=['Nos'], how='left')

# number of rows in each Nos group, aligned with df3's index
count = df3.groupby(['Nos'])[last1].transform('count')
df3[last1] = df3[last2] + df3[last1]/count
print(df3)

which yields

   Nos  00:00:00  12:00:00
0  123        20     228.0
1  123        20     228.0
2  123        20     228.0
3  125        50      82.5
4  125        50      82.5
5  567       500    2004.4
6  567       500    2004.4
7  567       500    2004.4
8  567       500    2004.4
9  567       500    2004.4
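
If, as in the question's expected output, you also want the floats rounded off and the result written back to a CSV, a possible final step (the output filename here is only an assumption) is:

# round the new column and write the appended table back to disk
df3[last1] = df3[last1].round().astype(int)
df3.to_csv('File_2_appended.csv', index=False)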

Alternatively, you could use

df3[last1] = df3.groupby(['Nos']).apply(lambda x: x[last2] + x[last1]/len(x) ).values

instead of

count = df3.groupby(['Nos'])[last1].transform('count')
df3[last1] = df3[last2] + df3[last1]/count 

However, this is slower, since the groupby/apply version performs one addition and one division per group, whereas

df3[last1] = df3[last2] + df3[last1]/count 

performs the addition and division on entire columns at once. If there are many groups, the difference in performance can be significant:

In [52]: df3 = pd.concat([df3]*1000)
In [56]: df3['Nos'] = np.random.randint(1000, size=len(df3))

In [57]: %timeit using_transform(df3)
100 loops, best of 3: 6.49 ms per loop

In [58]: %timeit using_apply(df3)
1 loops, best of 3: 270 ms per loop
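
The using_transform and using_apply functions timed here are presumably just wrappers around the two variants shown above; a minimal sketch of what they could look like:

import pandas as pd

# column names as used in the example above
last1, last2 = '12:00:00', '00:00:00'

def using_transform(df3):
    # vectorized: one addition and one division over whole columns
    count = df3.groupby(['Nos'])[last1].transform('count')
    return df3[last2] + df3[last1] / count

def using_apply(df3):
    # per-group: one addition and one division inside every group
    return df3.groupby(['Nos']).apply(lambda x: x[last2] + x[last1] / len(x)).values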