Calculating Euclidean distances in a faster way

Date: 2016-09-24 18:33:22

Tags: r performance distance missing-data euclidean-distance

I want to calculate the Euclidean distances between the rows of a data frame with 30,000 observations. The simple way to do this is the dist function (e.g., dist(data)). However, because my data frame is large, this takes too much time.

Some of the rows contain missing values. I do not need the distances between pairs of rows that both contain missing values, nor between pairs of rows where neither row contains missing values.

In a for loop, I tried to exclude the combinations that I do not need. Unfortunately, my solution takes even more time:

# Some example data
data <- data.frame(
  x1 = c(1, 22, NA, NA, 15, 7, 10, 8, NA, 5),
  x2 = c(11, 2, 7, 15, 1, 17, 11, 18, 5, 5),
  x3 = c(21, 5, 6, NA, 10, 22, 12, 2, 12, 3),
  x4 = c(13, NA, NA, 20, 12, 5, 1, 8, 7, 14)
)


# Measure speed of dist() function
start_time_dist <- Sys.time()

# Calculate euclidean distance with dist() function for complete dataset
dist_results <- dist(data)

end_time_dist <- Sys.time()
time_taken_dist <- end_time_dist - start_time_dist


# Measure speed of my own loop
start_time_own <- Sys.time()

# Calculate euclidean distance with my own loop only for specific cases

# # # 
# The following code should be faster!
# # # 

data_cc <- data[complete.cases(data), ]
data_miss <- data[!complete.cases(data), ]

distance_list <- list()

for(i in 1:nrow(data_miss)) {

  distances <- numeric()
  for(j in 1:nrow(data_cc)) {
    distances <- c(distances, dist(rbind(data_miss[i, ], data_cc[j, ]), method = "euclidean"))
  }

  distance_list[[i]] <- distances
}

end_time_own <- Sys.time()
time_taken_own <- end_time_own - start_time_own


# Compare speed of both calculations
time_taken_dist # 0.002001047 secs
time_taken_own # 0.01562881 secs

Can I calculate the Euclidean distances I need in a faster way?

1 Answer:

Answer 0 (score: 4)

I suggest you use parallel computation: put all of your code into one function and execute it in parallel.

By default, R does all of its computation in a single thread; you have to add parallel workers manually. Starting a cluster in R takes time, but if you have a large data frame, the main job will run roughly (your_processors_number - 1) times faster.

These links may also help: How-to go parallel in R – basics + tips and A gentle introduction to parallel computing in R.

A good option is to split your job into smaller packets and compute each packet in a separate worker. Create the cluster only once, because starting it in R is time-consuming. A sketch that applies this to the distance loop from the question follows the template below.

library(parallel)
library(foreach)
library(doParallel)
# I am not sure that all libraries are needed here;
# try ??your_function to determine which library you need

# Determine how many processors your computer has;
# one processor must always stay free for the system
no_cores <- detectCores() - 1

start.t.total <- Sys.time()
print(start.t.total)
startt <- Sys.time()
print(startt)

# Start the parallel workers (their output goes to the debug file)
cl <- makeCluster(no_cores, outfile = "mycalculation_debug.txt")
registerDoParallel(cl)

# Results will be collected in out.df (a data frame)
out.df <- foreach(p = 1:no_cores,
                  .combine = rbind,  # data from different workers will be bound into one table
                  .packages = c(),   # all packages that your function uses must be listed here
                  .inorder = TRUE) %dopar% {  # don't forget the %dopar% directive
  tryCatch({
    #
    # enter your function here and do what you want in parallel;
    # it should build the object returned below for packet p
    #
    chunk_result <- data.frame(packet = p)  # placeholder; replace with your real result
    print(Sys.time() - startt)
    print(Sys.time() - start.t.total)
    print(paste("packet", p, "done"))
    chunk_result  # the last expression is what .combine receives
  },
  error = function(e) paste0("The variable '", p, "' caused the error: '", e, "'"))
}
stopCluster(cl)
gc()  # force R to free memory from the finished worker processes
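
As a concrete illustration, here is a minimal sketch of how the cross-distance loop from the question could be parceled out in this way: the row indices of data_miss are split into one packet per core, and each worker computes the distances from its rows to data_cc with the same dist() call as in the question. The chunking helper (chunks) and the result name distance_list_par are my own illustrative additions, not part of the original answer.

library(parallel)
library(foreach)
library(doParallel)

no_cores <- detectCores() - 1
cl <- makeCluster(no_cores)
registerDoParallel(cl)

# One packet of data_miss row indices per worker
chunks <- split(seq_len(nrow(data_miss)),
                cut(seq_len(nrow(data_miss)), no_cores, labels = FALSE))

# Each worker returns a list of distance vectors (one per incomplete row);
# .combine = c concatenates the per-packet lists, so distance_list_par has
# the same structure as distance_list from the question
distance_list_par <- foreach(rows = chunks, .combine = c, .inorder = TRUE) %dopar% {
  lapply(rows, function(i) {
    sapply(seq_len(nrow(data_cc)), function(j) {
      dist(rbind(data_miss[i, ], data_cc[j, ]), method = "euclidean")
    })
  })
}

stopCluster(cl)

Whether this actually beats the single-threaded loop depends on the cluster start-up cost mentioned above: on the small example data the overhead dominates, so the gain only shows up on the full 30,000-row data set.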