Improve speed/usage of the gDistance function by using parallel processing and/or plyr/dplyr?

Time: 2014-12-19 00:33:52

Tags: r plyr spatial sp

I have many locations for roughly 1,000 individuals. The full dataset was about 2.5 million records, and my processing script took about 20 hours to run. Now, however, I have 24 million observations, so I need to clean up my code and, if possible, use parallel processing.

For each point, I want to find the nearest polygon (most points do not fall inside a polygon) and the distance to that polygon. The points are mostly observations at sea, and the polygons are the coastal (U.S.) counties closest to those points.

I have been doing this with the gDistance function in the rgeos package, running a series of loops (I know, I know) to break up the processing by individual. I have spent many hours trying to figure out how to convert this to plyr/dplyr syntax but cannot get it to work. Part of my problem, I assume, is that my objects are of class SpatialPointsDataFrame and SpatialPolygonsDataFrame.

library(sp)
library(rgeos)
library(plyr)
#  Create SpatialPointsDataFrame
#  My actual dataset has 24 million observations
my.pts <- data.frame(LONGITUDE=c(-85.4,-84.7,-82.7,-82.7,-86.5,-88.9,-94.8,-83.9,-87.8,-82.8),
             LATITUDE=c(30.0,29.9,27.5,28.5,30.4,26.1,29.3,28.0,29.4,27.8),
             MYID=c(1,1,2,2,2,2,3,4,4,4),
             INDEX=1:10)
coordinates(my.pts) <- c("LONGITUDE","LATITUDE")

#  Create two polygons in a SpatialPolygonsDataFrame
#  My actual dataset has 71 polygons (U.S. counties)
x1 <- data.frame(x=c(-92.3, -92.3, -90.7, -90.7, -92.3, -92.3),y=c(27.6, 29.4, 29.4, 27.6, 27.6, 27.6))
x1 <- as.data.frame(x1) 
x1 <- Polygon(rbind(x1,x1[1,]))

x2 <- data.frame(x=c(-85.2, -85.2, -83.3, -83.2, -85.2, -85.2),y=c(26.4, 26.9, 26.9, 26.0, 26.4, 26.4))
x2 <- as.data.frame(x2) 
x2 <- Polygon(rbind(x2,x2[1,]))

poly1 <- Polygons(list(x1),"poly1")
poly2 <- Polygons(list(x2),"poly2")
myShp <- SpatialPolygons(list(poly1,poly2),1:2)
sdf <- data.frame(ID=c(1,2))
row.names(sdf) <- c("poly1","poly2")
myShp <- SpatialPolygonsDataFrame(myShp,data=sdf)

#  I have been outputting my results to a list. With this small sample it's easy
#  to put everything into the single object county.vec, but I worry that a
#  24-million x 71 distance matrix would not be feasible. The non-loop version
#  below shows the output I've been getting.

COUNTY.LIST <- list()
county.vec <- gDistance(my.pts, myShp, byid=TRUE)
COUNTY.LIST[[1]] <- apply(county.vec, 2, min)
COUNTY.LIST[[2]] <- apply(county.vec, 2, which.min)
COUNTY.LIST[[3]] <- my.pts$INDEX

# I have been putting this into a loop so that county.vec gets discarded on each
# iteration. Seems like this could be done using dlply perhaps? And then I would
# have the power of parallel processing?
idx <- unique(my.pts$MYID)
COUNTY.LIST <- list()
for(i in 1:length(idx)){
    COUNTY.LIST[[i]] <- list()
    county.vec <- gDistance(my.pts[my.pts$MYID==idx[i],], myShp, byid=TRUE)
    COUNTY.LIST[[i]][[1]] <- apply(county.vec, 2, min)
    COUNTY.LIST[[i]][[2]] <- apply(county.vec, 2, which.min)
    COUNTY.LIST[[i]][[3]] <- my.pts$INDEX[my.pts$MYID==idx[i]]
    rm(county.vec)
}

dlply(my.pts,.(MYID),gDistance(my.pts, myShp, byid=TRUE),.parallel=TRUE)
> dlply(my.pts,.(MYID),gDistance(my.pts, myShp, byid=TRUE))
Error in eval.quoted(.variables, data) :
  envir must be either NULL, a list, or an environment.

# I suspect this error is because my.pts is a SpatialPointsDataFrame (an S4
# object, not a plain data frame). I also recognize that my function call
# probably isn't right, but first things first.

# I tried another way to reference the MYID field, more in line with the
# treatment of S4 objects...
dlply(my.pts,my.pts@data$MYID,gDistance(my.pts, myShp, byid=TRUE),.parallel=TRUE)

# It yields the same error.
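
Based on the error, I suspect something like the following might work — splitting on a plain data frame of indices and subsetting the Spatial objects inside the worker function, so dlply never touches the S4 object directly. This is only an untested sketch; the doParallel backend, the core count, and the .paropts settings are guesses on my part.

library(plyr)
library(doParallel)
registerDoParallel(cores = 2)  # register a backend so .parallel=TRUE has workers

# dlply chokes on S4 Spatial* objects, so split a plain data frame of IDs and
# indices instead, and subset my.pts inside the function.
idx.df <- data.frame(MYID = my.pts$MYID, INDEX = my.pts$INDEX)

COUNTY.LIST <- dlply(idx.df, .(MYID), function(d) {
    county.vec <- gDistance(my.pts[my.pts$INDEX %in% d$INDEX, ], myShp, byid = TRUE)
    list(min.dist  = apply(county.vec, 2, min),
         nearest   = apply(county.vec, 2, which.min),
         INDEX     = d$INDEX)
}, .parallel = TRUE,
   # with a PSOCK cluster (e.g. on Windows), workers need rgeos and the data:
   .paropts = list(.packages = "rgeos", .export = c("my.pts", "myShp")))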

I'd appreciate any suggestions people might have.

1 Answer:

Answer 0 (score: 8)

This is an old question, but maybe my simple approach can help someone else. It uses the parallel package. I'll write a general example; it will not run on the data in the question above.

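A minimal sketch of that approach with the parallel package, assuming generic pts (a SpatialPointsDataFrame) and polys (a SpatialPolygonsDataFrame) in the same CRS — the object names, chunking scheme, and core count here are illustrative only:

library(sp)
library(rgeos)
library(parallel)

n.pts   <- nrow(pts)
n.cores <- max(1, detectCores() - 1)

#  Split the point indices into one chunk per core.
chunks <- split(seq_len(n.pts), cut(seq_len(n.pts), n.cores, labels = FALSE))

#  Start a cluster and ship the data and rgeos to the workers.
cl <- makeCluster(n.cores)
clusterEvalQ(cl, library(rgeos))
clusterExport(cl, c("pts", "polys"))

#  Each worker computes the distance matrix for its chunk of points
#  (rows = polygons, columns = points) and reduces it to the nearest
#  polygon and the distance to it.
res <- parLapply(cl, chunks, function(i) {
    d <- gDistance(pts[i, ], polys, byid = TRUE)
    data.frame(INDEX        = i,
               nearest.poly = apply(d, 2, which.min),
               min.dist     = apply(d, 2, min))
})
stopCluster(cl)

nearest <- do.call(rbind, res)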

I hope this helps.