I have a bunch of CSVs in a folder, formatted like this:
  chunk                     timecodes                                                chunk_completed  chunk_id           diffs_avg           sd
0 [53]                      [[45930]]                                                [45930]          53                 NaN                 NaN
1 [53, 50]                  [[45930], [46480]]                                       [46480]          53-50              550.0               NaN
2 [53, 50, 63]              [[45930], [46480], [47980]]                              [47980]          53-50-63           1025.0              671.7514421272201
3 [53, 50, 63, 60]          [[45930], [46480], [47980], [49360]]                     [49360]          53-50-63-60        1143.3333333333333  517.3329037798903
4 [53, 50, 63, 60, 73]      [[45930], [46480], [47980], [49360], [50040]]            [50040]          53-50-63-60-73     1027.5              481.75893003313035
5 [53, 50, 63, 60, 73, 70]  [[45930], [46480], [47980], [49360], [50040], [50310]]   [50310]          53-50-63-60-73-70  876.0               537.4290650867331
6 [50]                      [[46480]]                                                [46480]          50                 NaN                 NaN
7 [50, 63]                  [[46480], [47980]]                                       [47980]          50-63              1500.0              NaN
8 [50, 63, 60]              [[46480], [47980], [49360]]                              [49360]          50-63-60           1440.0              84.8528137423857
9 [50, 63, 60, 73]          [[46480], [47980], [49360], [50040]]                     [50040]          50-63-60-73        1186.6666666666667  442.86943147313
I read them in as DataFrames and collected them in a list:
import glob
import pandas as pd

csvs = []
list_of_files = glob.glob('*.csv')  # all CSV files in the current folder
for file in list_of_files:
    f = pd.read_csv(file)
    csvs.append(f)
What I want to do is reduce them to a single dataframe without duplicate "chunk_id" values. Instead, I want to merge on this ID.
I tried:
from functools import reduce
red = reduce(pd.merge, csvs)
This gave me a very wide dataframe with no rows at all.
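A note on why that happens (my explanation, illustrated with a minimal two-frame sketch rather than the real data): with no on= argument, pd.merge inner-joins on every column the frames have in common, so a row survives only if all of its values are identical in both frames, and rows from different CSVs rarely are:

import pandas as pd

a = pd.DataFrame({'chunk_id': ['60-62'], 'diffs_avg': [2.0]})
b = pd.DataFrame({'chunk_id': ['60-62'], 'diffs_avg': [4.0]})

# merge() with no `on=` joins on BOTH shared columns (chunk_id AND
# diffs_avg); since diffs_avg differs, nothing matches and the result
# is an empty frame.
print(pd.merge(a, b))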
I haven't tried the averaging yet, but what I'd like is a dataframe with exactly the same columns as the example above, where every row sharing a "chunk_id" across all the dataframes is merged into one, averaging the "diffs_avg", "timecodes", "chunk_completed" and "sd" columns.
So, if I read in the following dfs:
DF1
chunk    timecodes   chunk_completed  chunk_id  diffs_avg  sd
[60 62]  [100, 200]  500              60-62     2          1
[58 53]  [800, 900]  1000             58-53     4          6

DF2
chunk    timecodes   chunk_completed  chunk_id  diffs_avg  sd
[60 62]  [200, 400]  1000             60-62     4          2
[30 33]  [200, 700]  800              30-33     6          7
The result would be (the 60-62 row is the average of its two input rows; the unmatched rows pass through unchanged):

[60 62]  [150, 300]  750   60-62  3  1.5
[58 53]  [800, 900]  1000  58-53  4  6
[30 33]  [200, 700]  800   30-33  6  7
Reproducible DF (as a dict):
{'chunk': {0: '[53]',
1: '[53, 50]',
2: '[53, 50, 63]',
3: '[53, 50, 63, 60]',
4: '[53, 50, 63, 60, 73]',
5: '[53, 50, 63, 60, 73, 70]',
6: '[50]',
7: '[50, 63]',
8: '[50, 63, 60]',
9: '[50, 63, 60, 73]'},
'chunk_completed': {0: '[45930]',
1: '[46480]',
2: '[47980]',
3: '[49360]',
4: '[50040]',
5: '[50310]',
6: '[46480]',
7: '[47980]',
8: '[49360]',
9: '[50040]'},
'chunk_id': {0: '53',
1: '53-50',
2: '53-50-63',
3: '53-50-63-60',
4: '53-50-63-60-73',
5: '53-50-63-60-73-70',
6: '50',
7: '50-63',
8: '50-63-60',
9: '50-63-60-73'},
'diffs_avg': {0: np.nan,
1: 550.0,
2: 1025.0,
3: 1143.3333333333333,
4: 1027.5,
5: 876.0,
6: np.nan,
7: 1500.0,
8: 1440.0,
9: 1186.6666666666667},
'sd': {0: np.nan,
1: np.nan,
2: 671.7514421272201,
3: 517.3329037798903,
4: 481.75893003313035,
5: 537.4290650867331,
6: np.nan,
7: np.nan,
8: 84.8528137423857,
9: 442.86943147313},
'timecodes': {0: '[[45930]]',
1: '[[45930], [46480]]',
2: '[[45930], [46480], [47980]]',
3: '[[45930], [46480], [47980], [49360]]',
4: '[[45930], [46480], [47980], [49360], [50040]]',
5: '[[45930], [46480], [47980], [49360], [50040], [50310]]',
6: '[[46480]]',
7: '[[46480], [47980]]',
8: '[[46480], [47980], [49360]]',
9: '[[46480], [47980], [49360], [50040]]'}}
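For reference (my note, not part of the original post): this dict can be turned straight back into a dataframe, though note that the bracketed columns come back as plain strings, not lists:

import numpy as np   # needed for the np.nan entries in the dict
import pandas as pd

data = { ... }        # the dict pasted above; the name `data` is mine
df = pd.DataFrame(data)
print(df.dtypes)      # chunk, timecodes and chunk_completed are object (string) columns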
Answer (score: 1):
Without knowing more about your timecodes column and its type: you can use pandas.DataFrame.groupby on chunk_id and .agg to take the mean.
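To make the simulation below runnable on its own, df1 and df2 can be rebuilt from the question's example tables (the construction is mine; the values come straight from the post):

import pandas as pd

# The question's two example frames, typed out by hand.
df1 = pd.DataFrame({
    'chunk': ['[60 62]', '[58 53]'],
    'timecodes': ['[100, 200]', '[800, 900]'],
    'chunk_completed': [500, 1000],
    'chunk_id': ['60-62', '58-53'],
    'diffs_avg': [2, 4],
    'sd': [1, 6],
})
df2 = pd.DataFrame({
    'chunk': ['[60 62]', '[30 33]'],
    'timecodes': ['[200, 400]', '[200, 700]'],
    'chunk_completed': [1000, 800],
    'chunk_id': ['60-62', '30-33'],
    'diffs_avg': [4, 6],
    'sd': [2, 7],
})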
# First of all, concatenate your CSVs into one big dataframe:
df3 = pd.concat(csvs, axis=0, ignore_index=True)

# Here we concat df1 & df2 instead, simulating the appending of your CSVs:
df3 = pd.concat([df1, df2], ignore_index=True)
print(df3)
     chunk   timecodes  chunk_completed chunk_id  diffs_avg  sd
0  [60 62]  [100, 200]              500    60-62          2   1
1  [58 53]  [800, 900]             1000    58-53          4   6
2  [60 62]  [200, 400]             1000    60-62          4   2
3  [30 33]  [200, 700]              800    30-33          6   7
Now we can aggregate with groupby:
df_grouped = df3.groupby('chunk_id').agg({'chunk_completed': 'mean',
                                          'diffs_avg': 'mean',
                                          'sd': 'mean'}).reset_index()
print(df_grouped)
  chunk_id  chunk_completed  diffs_avg   sd
0    30-33              800          6  7.0
1    58-53             1000          4  6.0
2    60-62              750          3  1.5
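This covers the numeric columns, but the question also asks to average timecodes, and in the real CSVs both timecodes and chunk_completed arrive from read_csv as bracketed strings. A sketch of one way to extend the same groupby, which is my extension rather than part of the original answer: it applies to the df3 built from the real CSVs (where both columns are strings) and assumes that rows sharing a chunk_id always have lists of identical shape, so they can be averaged element-wise:

import ast

import numpy as np
import pandas as pd

# Parse the stringified list columns into real Python lists first.
for col in ['timecodes', 'chunk_completed']:
    df3[col] = df3[col].apply(ast.literal_eval)

def mean_of_lists(s):
    # Stack the group's lists into one array and average across rows;
    # this assumes every row in the group has the same list shape.
    return np.mean(np.array(s.tolist()), axis=0).tolist()

df_grouped = df3.groupby('chunk_id').agg({'chunk': 'first',
                                          'timecodes': mean_of_lists,
                                          'chunk_completed': mean_of_lists,
                                          'diffs_avg': 'mean',
                                          'sd': 'mean'}).reset_index()
print(df_grouped)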