I am currently experimenting with parallel computing in R. I am trying to train a logistic ridge model, and my machine has 4 cores. I want to split my dataset evenly into 4 parts and have each core train a model on its own portion of the training data, then collect the results from every core into a single vector. The problem is that I don't know how to do this. Right now I am trying to parallelize with the foreach package, but the problem is that every core sees the same training data. Here is the code with the foreach package (it does not split the data):
library(ridge)
library(parallel)
library(foreach)
num_of_cores <- detectCores()
mydata <- read.csv("http://www.ats.ucla.edu/stat/data/binary.csv")
data_per_core <- floor(nrow(mydata)/num_of_cores)
result <- data.frame()
r <- foreach(icount(4), .combine = cbind) %dopar% {
  result <- logisticRidge(admit ~ gre + gpa + rank, data = mydata)
  coefficients(result)
}
Any idea how to split the data into x chunks and train the models in parallel at the same time?
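For context, the split-then-fit pattern the question describes can be sketched with only the base parallel package; in the snippet below, glm() and a toy data frame stand in for logisticRidge() and the binary.csv data, purely so the sketch is self-contained:

```r
library(parallel)

# Toy data standing in for binary.csv (admit ~ gre + gpa + rank)
set.seed(1)
mydata <- data.frame(
  admit = rbinom(100, 1, 0.4),
  gre   = rnorm(100, 580, 100),
  gpa   = rnorm(100, 3.4, 0.3),
  rank  = sample(1:4, 100, replace = TRUE)
)

num_of_cores <- 2
# Partition the rows into one chunk per core
chunks <- split(mydata, cut(seq_len(nrow(mydata)), num_of_cores, labels = FALSE))

cl <- makeCluster(num_of_cores)
# Each worker fits a model on its own chunk only
fits <- parLapply(cl, chunks, function(d)
  coef(glm(admit ~ gre + gpa + rank, data = d, family = binomial)))
stopCluster(cl)

# One coefficient vector per chunk, combined column-wise
do.call(cbind, fits)
```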
Answer 0 (score: 3)
The itertools
package provides a number of functions for iterating over various data structures with foreach loops. In this case, you can use the isplitRows
function to split the data frame row-wise into one chunk per worker:
library(ridge)
library(doParallel)
library(itertools)
num_of_cores <- detectCores()
cl <- makePSOCKcluster(num_of_cores)
registerDoParallel(cl)
mydata <- read.csv("http://www.ats.ucla.edu/stat/data/binary.csv")
r <- foreach(d = isplitRows(mydata, chunks = num_of_cores),
             .combine = cbind, .packages = "ridge") %dopar% {
  result <- logisticRidge(admit ~ gre + gpa + rank, data = d)
  coefficients(result)
}
If you want to control the maximum size of each chunk, isplitRows
also takes a chunkSize
argument.
Note that with this technique, each worker only receives its appropriate fraction of mydata.
This is particularly important for larger data frames with a PSOCK
cluster.
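The effect of chunkSize can be previewed in base R, which may help when checking how the rows will be divided; a minimal sketch where split() and ceiling() stand in for itertools::isplitRows (which yields the chunks lazily instead of all at once):

```r
# Base-R illustration of fixed-size row chunking, similar in spirit to
# isplitRows(df, chunkSize = 3): every chunk has at most chunkSize rows.
df <- data.frame(x = 1:10)
chunkSize <- 3
chunks <- split(df, ceiling(seq_len(nrow(df)) / chunkSize))
sapply(chunks, nrow)  # chunk sizes: 3 3 3 1
```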
Answer 1 (score: 2)
How about something like this? It uses snowfall
instead of the foreach
library, but should give the same results.
library(snowfall)
library(ridge)
# for reproducibility
set.seed(123)
num_of_cores <- parallel::detectCores()
mydata <- read.csv("http://www.ats.ucla.edu/stat/data/binary.csv")
data_per_core <- floor(nrow(mydata)/num_of_cores)
# assign each row to a random chunk via sampleid
mydata$sampleid <- sample(1:num_of_cores, nrow(mydata), replace = TRUE)
# create a small function that calculates the coefficients
regfun <- function(dat) {
  library(ridge) # this has to be inside the function, otherwise snowfall doesn't find logisticRidge
  result <- logisticRidge(admit ~ gre + gpa + rank, data = dat)
  coefs <- as.numeric(coefficients(result))
  return(coefs)
}
# prepare the data: one data frame per core
datlist <- lapply(1:num_of_cores, function(i) {
  mydata[mydata$sampleid == i, ]
})
# initiate the clusters
sfInit(parallel = T, cpus = num_of_cores)
# export the function to the cluster (the data is passed as an argument below)
sfExport("regfun")
# calculate (sfClusterApply is very similar to sapply)
res <- sfClusterApply(datlist, function(datlist.element) {
  regfun(dat = datlist.element)
})
#stop the cluster
sfStop()
# convert the list to a data.frame. data.table::rbindlist(list(res)) does the same job
res <- data.frame(t(matrix(unlist(res), ncol = num_of_cores)))
names(res) <- c("intercept", "gre", "gpa", "rank")
res
# res
#    intercept          gre           gpa         rank
# 1  -3.002592 1.558363e-03  0.7048146997 -0.382462408
# 2  -4.142939 1.060692e-03  0.9978841880 -0.314589628
# 3  -2.967130 2.315487e-03  0.6797382218 -0.464219036
# 4  -1.176943 4.786894e-05 -0.0004576679 -0.007618317
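Each chunk yields its own coefficient vector, so a natural follow-up (not shown in the answer) is to combine the rows of res, for instance by averaging. A minimal sketch on a stand-in res frame with illustrative values:

```r
# Stand-in for the res data frame above (values rounded, for illustration only)
res <- data.frame(
  intercept = c(-3.00, -4.14, -2.97, -1.18),
  gre       = c(1.56e-03, 1.06e-03, 2.32e-03, 4.79e-05)
)
# Average the per-chunk coefficients into a single pooled vector
colMeans(res)
```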