Stream processing a large csv file in R

Date: 2017-02-27 17:43:04

Tags: r csv chunks

I need to make some relatively simple changes to a very large csv file (c. 8.5GB). I initially tried various reader functions: read.csv, readr::read_csv, data.table::fread. However, they all ran out of memory.

I think I therefore need a stream-processing approach: read a chunk, update it, write it out, repeat. I found this answer, which is along the right lines; however, I don't know how to terminate the loop (I'm relatively new to R).

So I have two questions:

  1. What is the correct way to make the while loop work?
  2. Is there a better way (for some definition of 'better')? For example, is there some way to do this using dplyr & pipes?

Current code as follows:

    src_fname <- "testdata/model_input.csv"
    tgt_fname <- "testdata/model_output.csv"
    
    #Changes needed in file: rebase identifiers, set another col to constant value
    rebase_data <- function(data, offset) {
      data$'Unique Member ID' <- data$'Unique Member ID' - offset
      data$'Client Name' <- "TestClient2"
      return(data)
    }
    
    CHUNK_SIZE <- 1000
    src_conn = file(src_fname, "r")
    data <- read.csv(src_conn, nrows = CHUNK_SIZE, check.names=FALSE)
    cols <- colnames(data)
    offset <- data$'Unique Member ID'[1] - 1
    
    data <- rebase_data(data, offset)
    #1st time through, write the headers
    tgt_conn = file(tgt_fname, "w")
    write.csv(data,tgt_conn, row.names=FALSE)
    
    #loop over remaining data
    end = FALSE
    while(end == FALSE) {
      data <- read.csv(src_conn, nrows = CHUNK_SIZE, check.names=FALSE, col.names = cols)
      data <- rebase_data(data, offset)
      #write.csv doesn't support col.names=FALSE; so use write.table which does
      write.table(data, tgt_conn, row.names=FALSE, col.names=FALSE, sep=",")
      # ??? How to test for EOF and set end = TRUE if so  ???
      # This doesn't work, presumably because nrow() != CHUNK_SIZE on final loop?
      if (nrow(data) < CHUNK_SIZE) {
        end <- TRUE
      }
    
    }
    close(src_conn)
    close(tgt_conn)
    

Thanks for any pointers.

3 Answers:

Answer 0 (score: 1)

Try this:

library("chunked")

read_chunkwise(src_fname, chunk_size=CHUNK_SIZE) %>%
rebase_data(offset) %>%
write_chunkwise(tgt_fname)

You may need to fiddle with things a bit to get exactly what you want.

(Disclaimer: I haven't tried the code.)

Note that the package has no vignette, but standard usage is described on GitHub: https://github.com/edwindj/chunked/
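
Since rebase_data is a plain function rather than a dplyr verb, chunked may not be able to replay it per chunk. A minimal sketch of the same idea using chunked's dplyr-verb support instead (untested; assumes the question's column names survive the CSV reader unchanged):

    library(chunked)
    library(dplyr)

    # record a mutate step that chunked replays on every chunk,
    # then stream the result straight to the output file
    read_csv_chunkwise(src_fname, chunk_size = CHUNK_SIZE) %>%
      mutate(`Unique Member ID` = `Unique Member ID` - offset,
             `Client Name` = "TestClient2") %>%
      write_csv_chunkwise(file = tgt_fname)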

Answer 1 (score: 0)

OK, I found a solution, as follows:

    # src_fname <- "testdata/model_input.csv"
    # tgt_fname <- "testdata/model_output.csv"

    CHUNK_SIZE <- 20000

    #Changes needed in file: rebase identifiers, set another col to constant value
    rebase_data <- function(data, offset) {
      data$'Unique Member ID' <- data$'Unique Member ID' - offset
      data$'Client Name' <- "TestClient2"
      return(data)
    }

    #--------------------------------------------------------
    # Get the structure first to speed things up
    #--------------------------------------------------------
    structure <- read.csv(src_fname, nrows = 2, check.names = FALSE)
    cols <- colnames(structure)
    offset <- structure$'Unique Member ID'[1] - 1

    #Open the input & output files for reading & writing
    src_conn = file(src_fname, "r")
    tgt_conn = file(tgt_fname, "w")

    lines_read <- 0
    end <- FALSE
    read_header <- TRUE
    write_header <- TRUE
    while(end == FALSE) {
      data <- read.csv(src_conn, nrows = CHUNK_SIZE, check.names=FALSE, col.names = cols, header = read_header)
      if (nrow(data) > 0) {
        lines_read <- lines_read + nrow(data)
        print(paste0("lines read this chunk: ", nrow(data), ", lines read so far: ", lines_read))
        data <- rebase_data(data, offset)
        #write.csv doesn't support col.names=FALSE; so use write.table which does
        write.table(data, tgt_conn, row.names=FALSE, col.names=write_header, sep = ",")
      }
      if (nrow(data) < CHUNK_SIZE) {
        end <- TRUE
      }
      read_header <- FALSE
      write_header <- FALSE
    }
    close(src_conn)
    close(tgt_conn)
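
One caveat with the loop above: if the file's row count is an exact multiple of CHUNK_SIZE, the final read.csv call finds nothing left on the connection and throws a "no lines available in input" error instead of returning zero rows. A possible guard, assuming the same loop variables, is to wrap the read in tryCatch:

    data <- tryCatch(
      read.csv(src_conn, nrows = CHUNK_SIZE, check.names = FALSE,
               col.names = cols, header = read_header),
      error = function(e) data.frame()  # treat an exhausted connection as EOF
    )

With an empty data frame, nrow(data) is 0, so the existing nrow(data) < CHUNK_SIZE test still ends the loop cleanly.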

Answer 2 (score: 0)

Sorry to poke a 2-year-old thread, but now with readr::read_csv_chunked in the tidyverse, we can also do it like this:

    require(tidyverse)

    src_fname = "testdata/model_input.csv"
    tgt_fname = "testdata/model_output.csv"
    CHUNK_SIZE = 1000

    offset = read_csv(src_fname, n_max=1)$comm_code %>% as.numeric() - 1

    rebase.chunk = function(df, pos) {
      df$comm_code = df$comm_code %>% as.numeric() - offset
      df$'Client Name' = "TestClient2"
      is.append = ifelse(pos > 1, T, F)
      df %>% write_csv(
        tgt_fname,
        append=is.append
      )
    }

    read_csv_chunked(
      src_fname,
      callback=SideEffectChunkCallback$new(rebase.chunk),
      chunk_size = CHUNK_SIZE,
      progress = T  # optional, show progress bar
    )

The tricky part here is setting is.append based on the parameter pos, which gives the starting row number of the data frame df within the original file. Within readr::write_csv, when append=FALSE the header (column names) is written to the file; otherwise it is not.
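
For what it's worth, that header behaviour is easy to verify with a toy example (hypothetical file name demo.csv):

    library(readr)

    df <- data.frame(x = 1:2, y = c("a", "b"))
    write_csv(df, "demo.csv", append = FALSE)  # writes the header, then the rows
    write_csv(df, "demo.csv", append = TRUE)   # appends rows only, no header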