How can I bulk insert data into SQL Server from the RODBC package without creating a text file?

Asked: 2016-06-07 20:25:35

Tags: r rodbc

This question is an extension of How to quickly export data from R to SQL Server. Currently I am using the following code:

 # DB Handle  for config file #
   dbhandle <- odbcDriverConnect()

 # save the data in the table finally
   sqlSave(dbhandle, bp, "FACT_OP", append=TRUE, rownames=FALSE, verbose = verbose, fast = TRUE)
 # varTypes <-  c(Date="datetime", QueryDate = "datetime")
 # sqlSave(dbhandle, bp, "FACT_OP",  rownames=FALSE,verbose = TRUE, fast = TRUE, varTypes=varTypes)

 # DB handle close
  odbcClose(dbhandle)

I have also tried the approach below; it works very well and I get very good speed from it.

 toSQL = data.frame(...);
 write.table(toSQL,"C:\\export\\filename.txt",quote=FALSE,sep=",",row.names=FALSE,col.names=FALSE,append=FALSE);
sqlQuery(channel,"BULK
            INSERT Yada.dbo.yada
            FROM '\\\\<server-that-SQL-server-can-see>\\export\\filename.txt'
            WITH
            (
            FIELDTERMINATOR = ',',
            ROWTERMINATOR = '\\n'
            )");

But my problem is that I cannot let my data sit at rest between transactions (for data-security reasons, the data cannot be written to a file), so I am looking for a solution that lets me bulk insert directly from memory or a cache. Thanks for the help.

3 answers:

Answer 0: (score: 1)

Great question - this is also useful in situations where, for whatever reason, you cannot set up BULK INSERT permissions.

I threw together this poor man's solution a while back, when I had enough data that sqlSave was too slow but not enough to justify setting up BULK INSERT; it does not require any data to be written to a file. The main reason sqlSave and parameterized queries are so slow at inserting data is that each row gets its own new INSERT statement. Having R write the INSERT statement manually, as in the example below, bypasses this:

library(RODBC)
channel <- ...
dataTable <- ...relevant data...
numberOfThousands <- floor(nrow(dataTable)/1000)
extra <- nrow(dataTable)%%1000

thousandInsertQuery <- function(channel,dat,range){
  sqlQuery(channel,paste0("INSERT INTO Database.dbo.Responses (IDNum,State,Answer)
                                  VALUES "
                          ,paste0(
                            sapply(range,function(k) {
                              paste0("(",dat$IDNum[k],",'",
                                     dat$State[k],"','",
                                     gsub("'","''",dat$Answer[k],fixed=TRUE),"')")
                            })                                         
                            ,collapse=",")))
}

if(numberOfThousands)
  for(n in 1:numberOfThousands)
  {
    thousandInsertQuery(channel,dataTable,(1000*(n-1)+1):(1000*n))
  }
if(extra)
  thousandInsertQuery(channel,dataTable,(1000*numberOfThousands+1):(1000*numberOfThousands+extra))

A SQL INSERT statement written out with literal values will only accept up to 1,000 rows at a time, so this code breaks the data into chunks (which is much more efficient than one row at a time).
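The chunking arithmetic is easy to check in isolation. Here is a small standalone sketch (the 2,500-row count is a made-up example) that builds the same index ranges the loop above would pass to thousandInsertQuery:

```r
# Hypothetical row count to demonstrate the 1000-row chunking
n_rows <- 2500
numberOfThousands <- floor(n_rows / 1000)  # 2 full chunks of 1000
extra <- n_rows %% 1000                    # 500 leftover rows

# Build the same index ranges the loop above would use
chunks <- lapply(seq_len(numberOfThousands),
                 function(n) (1000 * (n - 1) + 1):(1000 * n))
if (extra)
  chunks <- c(chunks, list((1000 * numberOfThousands + 1):(1000 * numberOfThousands + extra)))

sapply(chunks, length)  # 1000 1000  500
```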

Obviously the thousandInsertQuery function would have to be customized to handle whatever columns your data frame has - note the single quotes around character/factor columns and the gsub that escapes any single quotes inside the character values. Beyond that, there are no safeguards against SQL injection attacks.
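For illustration, here is a minimal sketch of the kind of multi-row INSERT string this approach generates, using the same Responses columns as above but with made-up sample values; it needs no live connection:

```r
# Hypothetical 2-row data frame mirroring the Responses table above;
# the second Answer contains an embedded single quote on purpose
dat <- data.frame(IDNum = c(1L, 2L),
                  State = c("OH", "TX"),
                  Answer = c("fine", "it's ok"),
                  stringsAsFactors = FALSE)

# Build the VALUES clause the same way thousandInsertQuery does
values <- paste0(
  sapply(seq_len(nrow(dat)), function(k) {
    paste0("(", dat$IDNum[k], ",'",
           dat$State[k], "','",
           gsub("'", "''", dat$Answer[k], fixed = TRUE), "')")
  }),
  collapse = ",")

query <- paste0("INSERT INTO Database.dbo.Responses (IDNum,State,Answer) VALUES ", values)
cat(query, "\n")
# INSERT INTO Database.dbo.Responses (IDNum,State,Answer) VALUES (1,'OH','fine'),(2,'TX','it''s ok')
```

Note how the embedded quote in "it's ok" is doubled to "it''s ok", which is the T-SQL way of escaping a single quote inside a string literal.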

Answer 1: (score: 0)

Building on @jpd527's solution, which I found really worth digging into...

require(RODBC)
channel <- #connection parameters
dbPath <- # path to your table, database.table
data <- # the DF you have prepared for insertion, /!\ beware of column names and values types...

# Function to insert 1000 rows of data in one sqlQuery call, coming from
# any DF and into any database.table

insert1000Rows <- function(channel, dbPath, data, range){

    # Defines columns names for the database.table
    columns <- paste(names(data), collapse = ", ")

    # Initialize a string which will incorporate all 1000 rows of values
    values <- ""

    # Not very elegant, but appropriately builds the values (a, b, c...), (d, e, f...) into a string
    for (i in range) {
        for (j in 1:ncol(data)) {

            # First column
            if (j == 1) {

                if (i == min(range)) {
                    # First row, only "("
                    values <- paste0(values, "(")
                } else {
                    # Next rows, ",("
                    values <- paste0(values, ",(")
                }
            }

            # Value Handling
            values <- paste0(
                values

                # Handling NA values you want to insert as NULL values
                , ifelse(is.na(data[i, j])
                    , "null"

                    # Handling numeric values you want to insert as INT
                    , ifelse(is.numeric(data[i, j])
                        , data[i, j]

                        # Else handling as character to insert as VARCHAR
                        , paste0("'", data[i, j], "'")
                    )
                )
            )

            # Separator for columns
            if (j == ncol(data)) {

                # Last column, close parenthesis
                values <- paste0(values, ")")
            } else {

                # Other columns, add comma
                values <- paste0(values, ",")
            }
        }
    }

    # Once the string is built, insert it into SQL Server
    sqlQuery(channel,paste0("insert into ", dbPath, " (", columns, ") values ", values))
}
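As a quick sanity check, the per-cell rendering rule used above (NA becomes null, numerics are inserted bare, everything else is quoted) can be isolated into a small helper - a sketch, where the function name renderValue is my own, not part of the answer's code:

```r
# Hypothetical helper reproducing insert1000Rows' per-cell logic
renderValue <- function(x) {
  if (is.na(x)) {
    "null"                      # NA -> SQL NULL
  } else if (is.numeric(x)) {
    as.character(x)             # numeric -> bare literal
  } else {
    paste0("'", x, "'")         # anything else -> quoted VARCHAR
  }
}

renderValue(NA)      # "null"
renderValue(42)      # "42"
renderValue("cat")   # "'cat'"
```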

The insert1000Rows function is used in a loop inside the next function, sqlInsertAll, to which you simply pass which DF you want to insert into which database.table.

# Main function which uses the insert1000rows function in a loop
sqlInsertAll <- function(channel, dbPath, data) {
    numberOfThousands <- floor(nrow(data) / 1000)
    extra <- nrow(data) %% 1000
    if (numberOfThousands) {
        for(n in 1:numberOfThousands) {
            insert1000Rows(channel, dbPath, data, (1000 * (n - 1) + 1):(1000 * n))
            print(paste0(n, "/", numberOfThousands))
        }
    }
    if (extra) {
        insert1000Rows(channel, dbPath, data, (1000 * numberOfThousands + 1):(1000 * numberOfThousands + extra))
    }
}

With this, I can insert 250,000 rows of data in about 5 minutes, whereas sqlSave from the RODBC package took more than 24 hours.

Answer 2: (score: -1)

How about using the DBI::dbWriteTable() function? Example below (I am connecting my R code to an AWS RDS instance of MS SQL Express):

library(DBI)
library(RJDBC)
library(tidyverse)

# Specify where your driver lives
drv <- JDBC(
  "com.microsoft.sqlserver.jdbc.SQLServerDriver",
  "c:/R/SQL/sqljdbc42.jar")

# Connect to AWS RDS instance
conn <- drv %>%
  dbConnect(
    host = "jdbc:sqlserver://xxx.ccgqenhjdi18.ap-southeast-2.rds.amazonaws.com",
    user = "xxx",
    password = "********",
    port = 1433,
    dbname = "qlik")

if(0) {
  # check what the conn object has access to
  queryResults <- conn %>%
    dbGetQuery("select * from information_schema.tables")
}

# Create test data
example_data <- data.frame(animal = c("dog", "cat", "sea cucumber", "sea urchin"),
                           feel = c("furry", "furry", "squishy", "spiny"),
                           weight = c(45, 8, 1.1, 0.8))

# Works in 20ms in my case
system.time(
  conn %>% dbWriteTable("qlik.export.test", example_data)
)

# Let us see if we see the exported results
conn %>% dbGetQuery("select * FROM qlik.export.test")

# Let's clean the mess and force-close connection at the end of the process
conn %>% dbDisconnect()

It works very fast for small amounts of data transferred, and it looks rather elegant if you want a data.frame -> SQL table solution.

Enjoy!