I am facing the following challenge:
My Python project directory contains 300 different CSV files, each with a different structure (i.e., different columns), and I want to merge all of them into one consolidated CSV file.
Here is a two-file example:
marketcap.csv:
marketcap,ticker
1000,AAPL
2000,TSLA
3000,OSTK
revenue.csv:
revenue,ticker
2000,AAPL
300,MDXG
The consolidated CSV file should be structured as follows:
consolidated.csv:
marketcap,revenue,ticker
1000,2000,AAPL
2000,0,TSLA
3000,0,OSTK
0,300,MDXG
I have the complete list of the 300 distinct columns (they are known in advance), and there are 300 source CSV files. The tickers, however, are not known upfront. As you can see from the example above, the tickers available in each file can differ, i.e., if a ticker is not listed in a file, the corresponding data point (e.g., revenue) should automatically be set to 0 in the consolidated file.
I searched Stack Overflow but could not find this specific problem. Thanks in advance for your help and any ideas on how to approach it.
Answer 0 (score: 0)
For the example as given, a one-liner using pandas dataframes works well. For the full set of 300 files, you need to know the common column in each file for this approach to work.
When you know the common column in the files:
import pandas as pd

# Create dataframes from the csv files:
market = pd.read_csv("filepath/marketcap.csv")
revenue = pd.read_csv("filepath/revenue.csv")
# Merge both files using pd.merge
consolidated = market.merge(revenue, how='outer', on='ticker').fillna(value=0)
# This gives a full outer merge of both csv files; fillna replaces the resulting null values with 0
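For the two sample files, printing consolidated shows the fully merged result. Note that the numeric columns come back as floats, because the outer merge introduces NaN values (which force a float dtype) before fillna runs:

   marketcap ticker  revenue
0     1000.0   AAPL   2000.0
1     2000.0   TSLA      0.0
2     3000.0   OSTK      0.0
3        0.0   MDXG    300.0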
The code below searches both dataframes for common columns before merging:
import glob
import pandas as pd

directory = 'C:/Test'  # specify the directory containing the 300 files
filelist = sorted(glob.glob(directory + '/*.csv'))  # collect the paths of all 300 csv files in the directory into a sorted list

consolidated = pd.DataFrame()  # create a new empty dataframe for the consolidation

for file in filelist:  # iterate through each of the 300 files
    df1 = pd.read_csv(file)  # create a df from the current file
    df1col = list(df1.columns)  # save its columns to a list
    df2 = consolidated  # set the consolidated result so far as df2
    df2col = list(df2.columns)  # save the columns of the consolidated result as a list
    commoncol = [i for i in df1col for j in df2col if i == j]  # collect the column names both lists share
    # print(commoncol)
    if commoncol == []:  # on the first iteration the consolidated df is empty, so there are no common columns
        consolidated = pd.concat([df1, df2], axis=1).fillna(value=0)  # concatenate with no common columns, replacing null values with 0
    else:
        consolidated = df1.merge(df2, how='outer', on=commoncol).fillna(value=0)  # outer merge on the common column(s) and replace null values with 0
    # print(consolidated)  # optionally, check the consolidated df at each iteration

# write the consolidated df to another CSV
consolidated.to_csv('C:/<filepath>/consolidated.csv', header=True, index=False)
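Since all of the files share the ticker column, a shorter equivalent is to fold the list of dataframes together with functools.reduce. This is a minimal sketch, assuming every one of the 300 files really does contain a ticker column, and reusing the C:/Test directory from above:

import glob
from functools import reduce
import pandas as pd

frames = [pd.read_csv(f) for f in sorted(glob.glob('C:/Test/*.csv'))]  # load every csv in the directory
# successively outer-merge the frames on the shared ticker column
consolidated = reduce(lambda left, right: left.merge(right, how='outer', on='ticker'), frames)
consolidated.fillna(value=0).to_csv('C:/Test/consolidated.csv', index=False)  # replace nulls with 0 and write out

Filling the NaN values once at the end, rather than inside every merge step, is slightly cleaner and produces the same result.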