What is the fastest way to combine 100 CSV files with headers into a single file, given the following setup:
(The detailed setup is included to make the scope of the question more specific. Updated based on the feedback here.)
File 1.csv:
a,b
1,2
File 2.csv:
a,b
3,4
Final out.csv:
a,b
1,2
3,4
According to my benchmarks, the fastest of all the proposed methods is pure Python. Is there any faster method?
Benchmarks (updated with methods from comments and posts):
Method Time
pure python 0.298s
sed 1.9s
awk 2.5s
R data.table 4.4s
R data.table with colClasses 4.4s
Spark 2 40.2s
python pandas 1min 11.0s
Tool versions:
sed 4.2.2
awk: mawk 1.3.3 Nov 1996
Python 3.6.1
Pandas 0.20.1
R 3.4.0
data.table 1.10.4
Spark 2.1.1
Code in Jupyter notebook cells:
sed:
%%time
!head -n 1 temp/in/1.csv > temp/merged_sed.csv
!sed -s 1d temp/in/*.csv >> temp/merged_sed.csv
Pure Python with all-binary reads and writes, using the undocumented "next" behavior:
%%time
with open("temp/merged_pure_python2.csv","wb") as fout:
# first file:
with open("temp/in/1.csv", "rb") as f:
fout.write(f.read())
# now the rest:
for num in range(2,101):
with open("temp/in/"+str(num)+".csv", "rb") as f:
next(f) # skip the header
fout.write(f.read())
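If the input files were much larger, f.read() would hold an entire file in memory at once. A minimal chunked variant using shutil.copyfileobj and readline() (a sketch, not part of the benchmark; the output file name is illustrative):

import shutil

with open("temp/merged_pure_python_chunked.csv", "wb") as fout:
    # first file: copy it whole, header included
    with open("temp/in/1.csv", "rb") as f:
        shutil.copyfileobj(f, fout)
    # remaining files: consume the header line, then stream the rest in chunks
    for num in range(2, 101):
        with open("temp/in/" + str(num) + ".csv", "rb") as f:
            f.readline()
            shutil.copyfileobj(f, fout)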
AWK:
%%time
!awk 'NR==1; FNR==1{{next}} 1' temp/in/*.csv > temp/merged_awk.csv
R data.table:
%%time
%%R
filenames <- paste0("temp/in/",list.files(path="temp/in/",pattern="*.csv"))
files <- lapply(filenames, fread)
merged_data <- rbindlist(files, use.names=F)
fwrite(merged_data, file="temp/merged_R_fwrite.csv", row.names=FALSE)
R data.table with colClasses:
%%time
%%R
filenames <- paste0("temp/in/",list.files(path="temp/in/",pattern="*.csv"))
files <- lapply(filenames, fread,colClasses=c(
V1="integer",
V2="integer",
V3="integer",
V4="integer",
V5="integer",
V6="integer",
V7="integer",
V8="integer",
V9="integer",
V10="integer"))
merged_data <- rbindlist(files, use.names=F)
fwrite(merged_data, file="temp/merged_R_fwrite.csv", row.names=FALSE)
Spark (pyspark):
%%time
df = spark.read.format("csv").option("header", "true").load("temp/in/*.csv")
df.coalesce(1).write.option("header", "true").csv("temp/merged_pyspark.csv")
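Note that Spark's csv() writer creates temp/merged_pyspark.csv as a directory containing a part file (plus a _SUCCESS marker), not a single flat file. A hedged clean-up sketch in Python, assuming coalesce(1) produced exactly one part file (the exact part-file naming can vary by Spark version, and the output name is illustrative):

import glob
import shutil

# Spark writes a directory; pick out the single part file produced by coalesce(1)
part_files = glob.glob("temp/merged_pyspark.csv/part-*")
assert len(part_files) == 1, "expected exactly one part file"
shutil.move(part_files[0], "temp/merged_pyspark_single.csv")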
Python pandas:
%%time
import glob
import pandas as pd
interesting_files = glob.glob("temp/in/*.csv")
df_list = []
for filename in sorted(interesting_files):
    df_list.append(pd.read_csv(filename))
full_df = pd.concat(df_list)
full_df.to_csv("temp/merged_pandas.csv", index=False)
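pandas is slow here mainly because read_csv infers and converts every value and to_csv re-serializes them all. If a pandas-based merge is still wanted, passing explicit dtypes may trim the type-inference overhead; a sketch, assuming the generated columns are named V1..V10 as in the colClasses example (not benchmarked here, output name illustrative):

import glob
import pandas as pd

dtypes = {"V" + str(i): "int32" for i in range(1, 11)}  # assumed column names V1..V10
files = sorted(glob.glob("temp/in/*.csv"))
frames = [pd.read_csv(f, dtype=dtypes) for f in files]
pd.concat(frames, ignore_index=True).to_csv("temp/merged_pandas_dtypes.csv", index=False)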
The data was generated by:
%%R
df=data.table(replicate(10,sample(0:9,100000,rep=TRUE)))
for (i in 1:100){
write.csv(df,paste0("temp/in/",i,".csv"), row.names=FALSE)
}
Answer 0 (score: 1):
sed is probably the fastest. I would also propose an awk alternative:
awk 'NR==1; FNR==1{next} 1' file* > output
It prints the first line (header) from the first file, then skips all other first lines from the rest of the files.
Timings:
I tried 100 files, 10,000 lines long each, around 200 MB (not sure). Here is the worst timing on my server.
real 0m0.429s
user 0m0.360s
sys 0m0.068s
Server specs (a little monster):
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
Stepping: 2
CPU MHz: 2394.345
BogoMIPS: 4789.86
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 15360K
NUMA node0 CPU(s): 0-11
Answer 1 (score: 1):
According to the benchmarks in the question, the fastest method is pure Python using the undocumented "next()" behavior with binary files, proposed by Stefan Pochmann.
Benchmarks (updated with methods from comments and posts):
Method Time
pure python 0.298s
sed 1.9s
awk 2.5s
R data.table 4.4s
R data.table with colClasses 4.4s
Spark 2 40.2s
python pandas 1min 11.0s
Tool versions:
sed 4.2.2
awk: mawk 1.3.3 Nov 1996
Python 3.6.1
Pandas 0.20.1
R 3.4.0
data.table 1.10.4
Spark 2.1.1
The pure Python code:
with open("temp/merged_pure_python2.csv","wb") as fout:
# first file:
with open("temp/in/1.csv", "rb") as f:
fout.write(f.read())
# now the rest:
for num in range(2,101):
with open("temp/in/"+str(num)+".csv", "rb") as f:
next(f) # skip the header
fout.write(f.read())
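The snippet above hard-codes the file names 1.csv through 100.csv. A hedged generalization that merges whatever CSVs sit in the directory (note that a plain lexical sort puts 10.csv before 2.csv, which changes row order but not content; the output name is illustrative):

import glob

files = sorted(glob.glob("temp/in/*.csv"))  # lexical order: 1.csv, 10.csv, 100.csv, 11.csv, ...
with open("temp/merged_pure_python_glob.csv", "wb") as fout:
    # first file: keep its header
    with open(files[0], "rb") as f:
        fout.write(f.read())
    # remaining files: skip the header line, copy the rest
    for filename in files[1:]:
        with open(filename, "rb") as f:
            next(f)  # skip the header, as in the original
            fout.write(f.read())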