I have 10 different subdirectories, each containing the same set of file names (20 files per directory), and column 0 is the index column in every file.
For example:
**DIRECTORY A**
- data_20170101_k.csv
- data_20170102_k.csv
- data_20170103_k.csv
- data_20170104_k.csv
- data_20170105_k.csv
.....
.....
- data_20170120_k.csv
**DIRECTORY B**
- data_20170101_k.csv
- data_20170102_k.csv
- data_20170103_k.csv
- data_20170104_k.csv
- data_20170105_k.csv
.....
.....
- data_20170120_k.csv
**DIRECTORY C**
- data_20170101_k.csv
- data_20170102_k.csv
- data_20170103_k.csv
- data_20170104_k.csv
- data_20170105_k.csv
.....
.....
- data_20170120_k.csv
Each of the above files contains 6 columns and index_col = 0 with NO
column headers
**DIRECTORY FILES_MERGED**
- data_20170101_k.csv
- data_20170102_k.csv
- data_20170103_k.csv
- data_20170104_k.csv
- data_20170105_k.csv
.....
.....
- data_20170120_k.csv
I want to MERGE all files with the SAME NAME from EACH subdirectory and save ONE file with the SAME NAME into a NEW subdirectory, e.g. DIRECTORY FILES_MERGED, with INDEX = column 0. The merged file should have only one index column, followed by columns 1, 2, 3, 4, 5 from the file of the same name in each directory.
I have already read a csv file into a pandas dataframe:
df = pd.read_csv(filename, sep=",", header=None, usecols=[0, 1, 2, 3, 4, 5])
This is the format of the dataframe.
My initial raw dataframe:
0 1 2 3 4 5
0 1451606820 1.0862 1.08630 1.08578 1.08578 25
1 1451608800 1.0862 1.08630 1.08578 1.08610 10
2 1451608860 1.0862 1.08620 1.08578 1.08578 16
3 1451610180 1.0862 1.08630 1.08578 1.08578 27
4 1451610480 1.0858 1.08590 1.08560 1.08578 21
5 1451610540 1.0857 1.08578 1.08570 1.08578 2
6 1451610600 1.0857 1.08578 1.08570 1.08578 2
7 1451610720 1.0857 1.08578 1.08570 1.08578 2
8 1451610780 1.0857 1.08578 1.08570 1.08578 2
Column '0' = Datetime in Epoch time
Columns 1,2,3,4,5 are values
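To make the goal concrete, here is a rough sketch of the kind of merge I am after for a single filename. The directory names and the FILES_MERGED output path are placeholders for my actual layout; the point is that column 0 is the shared index and the value columns of the same-named file from each directory end up side by side:
import pandas as pd

# Hypothetical paths for illustration only -- replace with the real directories.
dirs = ["A", "B", "C"]           # one entry per subdirectory
fname = "data_20170101_k.csv"    # the same filename exists in every subdirectory

# Read each directory's copy with column 0 (epoch time) as the index,
# then align the value columns side by side on that shared index.
frames = [pd.read_csv(f"{d}/{fname}", header=None, index_col=0) for d in dirs]
merged = pd.concat(frames, axis=1)
merged.to_csv(f"FILES_MERGED/{fname}", header=False)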
**Answer 0** (score: 1)
There are many ways to do this. Staying within pandas, I did the following, using the file structure
root/
├── dir1/
│   ├── data_20170101_k
│   ├── data_20170102_k
│   └── ...
├── dir2/
│   ├── data_20170101_k
│   ├── data_20170102_k
│   └── ...
└── ...
This code works; it is kept a bit verbose for the sake of explanation, but you could shorten the implementation.
import glob
import pandas as pd
CONCAT_DIR = "/FILES_CONCAT/"  # Output directory; it must exist before the files are written
# Use glob module to return all csv files under root directory. Create DF from this.
files = pd.DataFrame([file for file in glob.glob("root/*/*")], columns=["fullpath"])
# fullpath
# 0 root\dir1\data_20170101_k.csv
# 1 root\dir1\data_20170102_k.csv
# 2 root\dir2\data_20170101_k.csv
# 3 root\dir2\data_20170102_k.csv
# Split the full path into directory and filename
files_split = files['fullpath'].str.rsplit("\\", n=1, expand=True).rename(columns={0: 'path', 1: 'filename'})
# path filename
# 0 root\dir1 data_20170101_k.csv
# 1 root\dir1 data_20170102_k.csv
# 2 root\dir2 data_20170101_k.csv
# 3 root\dir2 data_20170102_k.csv
# Join these into one DataFrame
files = files.join(files_split)
# fullpath path filename
# 0 root\dir1\data_20170101_k.csv root\dir1 data_20170101_k.csv
# 1 root\dir1\data_20170102_k.csv root\dir1 data_20170102_k.csv
# 2 root\dir2\data_20170101_k.csv root\dir2 data_20170101_k.csv
# 3 root\dir2\data_20170102_k.csv root\dir2 data_20170102_k.csv
# Iterate over unique filenames; read CSVs, concat DFs, save file
for f in files['filename'].unique():
    paths = files[files['filename'] == f]['fullpath']          # Get list of full paths sharing this filename
    dfs = [pd.read_csv(path, header=None) for path in paths]   # Get list of dataframes from the CSV file paths
    concat_df = pd.concat(dfs)                                  # Concat dataframes into one
    concat_df.to_csv(CONCAT_DIR + f)                            # Save dataframe
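As noted above, the implementation could be shortened. One possible more compact variant is sketched below using pathlib instead of a DataFrame of paths; it assumes the same root/ layout and creates the output directory if it does not exist yet (the names root, concat_dir and filenames are my own):
import pathlib
import pandas as pd

root = pathlib.Path("root")
concat_dir = pathlib.Path("FILES_CONCAT")
concat_dir.mkdir(exist_ok=True)                   # create the output directory if missing

filenames = {p.name for p in root.glob("*/*")}    # unique filenames across all subdirectories
for name in filenames:
    paths = sorted(root.glob(f"*/{name}"))        # every subdirectory's copy of this filename
    concat_df = pd.concat([pd.read_csv(p, header=None) for p in paths])
    concat_df.to_csv(concat_dir / name)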
**Answer 1** (score: 0)
This can be done very simply in the shell:
find . -name "*.csv" | xargs cat > mergedCSV
(Note: do not give the merged output file a .csv extension, because find would then match it as well and make the result inconsistent. Once the command has finished, the file can be renamed to .csv.)