Reading multiple .dat files into R

Asked: 2014-04-24 14:07:16

Tags: r, file, sqldf

Hi, I'm new here and a beginner with R.

My problem: I have multiple files (test1.dat, test2.dat, ...) and I read them into R with this code:

filelist <- list.files(pattern = "*.dat")

df_list <- lapply(filelist, function(x)
  read.table(x, header = FALSE, sep = ",", colClasses = "factor",
             comment.char = "", col.names = "raw"))

Now my problem is that my data files are large, and I found a solution that uses the sqldf package to speed things up:

sql <- file("test2.dat")
df <- sqldf("select * from sql", dbname = tempfile(),
                    file.format = list(header = FALSE, row.names = FALSE, colClasses = "factor", 
                                       comment.char = "", col.names ="raw"))

This works for a single file, but I haven't been able to change the code so that it reads multiple files, as in the first snippet. Can anyone help me? Thanks! Momo

2 Answers:

Answer 0 (score: 1)

This seems to work (though I suspect there is a faster way to do it within sqldf):

sql.l <- lapply(filelist , file)

df_list2 <- lapply(sql.l, function(i) sqldf("select * from i" ,  
    dbname = tempfile(),  file.format = list(header = TRUE, row.names = FALSE)))
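
As an aside, the sqldf package also provides a read.csv.sql() helper that wraps the connection handling itself; a minimal sketch of applying it per file (assuming comma-separated files with a header row, as in the benchmark below - not benchmarked here) could look like this:

# Untested sketch: read each file via sqldf's read.csv.sql() helper,
# assuming comma-separated files with a header row
library(sqldf)
df_list3 <- lapply(filelist, function(f)
  read.csv.sql(f, sql = "select * from file", header = TRUE, sep = ","))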


Checking the speed - partly taken from mnel's post Quickly reading very large tables as dataframes in R:

library(data.table)
library(sqldf)

# test data
n=1e6
DT = data.table( a=sample(1:1000,n,replace=TRUE),
                 b=sample(1:1000,n,replace=TRUE),
                 c=rnorm(n),
                 d=sample(c("foo","bar","baz","qux","quux"),n,replace=TRUE),
                 e=rnorm(n),
                 f=sample(1:1000,n,replace=TRUE) )

# write 5 files out
lapply(1:5, function(i) write.table(DT,paste0("test", i, ".dat"), 
                                 sep=",",row.names=FALSE,quote=FALSE))

Reading with data.table:

filelist <- list.files(pattern = "*.dat")

system.time(df_list <- lapply(filelist, fread))

#  user  system elapsed 
# 5.244   0.200   5.457 
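
If the goal is one combined table rather than a list of per-file tables, data.table's rbindlist() can stack the results (assuming all files share the same columns, as they do for the test data above):

# Stack the list of data.tables into a single table (assumes identical column layouts)
DT_all <- rbindlist(df_list)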

Reading with sqldf:

sql.l <- lapply(filelist , file)

system.time(df_list2 <- lapply(sql.l, function(i) sqldf("select * from i",
  dbname = tempfile(), file.format = list(header = TRUE, row.names = FALSE))))

#    user  system elapsed 
#  35.594   1.432  37.357 

Check - apart from the attributes, the results look fine:
all.equal(df_list , df_list2)
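
The attribute differences are expected here, since fread() returns data.tables while sqldf returns plain data frames. One way to compare only the values (a sketch, assuming check.attributes is passed through to the element-wise comparisons) is:

# Compare values only, ignoring class and other attribute differences
all.equal(df_list, df_list2, check.attributes = FALSE)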

Answer 1 (score: 0)

Somehow lapply() did not work for me.

map_dfr() worked for me to combine more than 7,000 .dat files. It also skips the first row of each file and filters on column "V1":

# needs dplyr and purrr (part of the tidyverse) for %>%, map_dfr() and filter()
library(dplyr)
library(purrr)

rawDATfile.list <- list.files(pattern = "*.DAT")

data <- rawDATfile.list %>%
  map_dfr(~ read.delim(.x, header = FALSE, sep = ";", skip = 1, quote = "\"'") %>%
            mutate_all(as.character)) %>%
  filter(V1 == "B")