I am trying to scrape a large number of web pages so that I can analyze them later. Since the number of URLs is large, I decided to use the parallel package together with XML.

Specifically, I am using the htmlParse() function from XML, which works fine when used with sapply, but produces empty objects of class HTMLInternalDocument when used with parSapply.
url1<- "http://forums.philosophyforums.com/threads/senses-of-truth-63636.html"
url2<- "http://forums.philosophyforums.com/threads/the-limits-of-my-language-impossibly-mean-the-limits-of-my-world-62183.html"
url3<- "http://forums.philosophyforums.com/threads/how-language-models-reality-63487.html"
myFunction<- function(x){
cl<- makeCluster(getOption("cl.cores",detectCores()))
ok<- parSapply(cl=cl,X=x,FUN=htmlParse)
return(ok)
}
urls <- c(url1, url2, url3)

# Works
output1 <- sapply(urls, function(x) htmlParse(x))
str(output1[[1]])
# Classes 'HTMLInternalDocument', 'HTMLInternalDocument', 'XMLInternalDocument',
#   'XMLAbstractDocument', 'oldClass' <externalptr>
output1[[1]]
# Doesn't work
myFunction <- function(x) {
  cl <- makeCluster(getOption("cl.cores", detectCores()))
  ok <- parSapply(cl = cl, X = x, FUN = htmlParse)
  stopCluster(cl)
  return(ok)
}

output2 <- myFunction(urls)
str(output2[[1]])
# Classes 'HTMLInternalDocument', 'HTMLInternalDocument', 'XMLInternalDocument',
#   'XMLAbstractDocument', 'oldClass' <externalptr>
output2[[1]]
# empty
Thanks.
Answer 0 (score: 11)
You can use getURIAsynchronous from the RCurl package, which lets the caller download several URIs at the same time. (As for why your parSapply version fails: htmlParse() returns an external pointer to a C-level document, and external pointers cannot be serialized, so the parsed documents come back from the worker processes as empty objects.)
library(RCurl)
library(XML)

get.asynch <- function(urls) {
  # Download all pages concurrently
  txt <- getURIAsynchronous(urls)
  ## This part can easily be parallelized too;
  ## I am just using lapply here as a first attempt.
  res <- lapply(txt, function(x) {
    doc <- htmlParse(x, asText = TRUE)
    xpathSApply(doc, "/html/body/h2[2]", xmlValue)
  })
  res
}
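As the comment above notes, the parsing step can be moved onto a cluster as well: the workers receive plain text and return character vectors, both of which serialize cleanly between processes. A minimal sketch of that variant (the name get.asynch.par is made up here, and it reuses the same XPath as above):

library(RCurl)
library(XML)
library(parallel)

# Hypothetical parallel variant: download asynchronously on the master,
# then parse and extract on worker processes. Only text crosses the
# process boundary, so nothing is lost to serialization.
get.asynch.par <- function(urls) {
  txt <- getURIAsynchronous(urls)
  cl <- makeCluster(getOption("cl.cores", detectCores()))
  on.exit(stopCluster(cl))
  clusterEvalQ(cl, library(XML))  # each worker needs XML loaded
  parLapply(cl, txt, function(x) {
    doc <- htmlParse(x, asText = TRUE)
    xpathSApply(doc, "/html/body/h2[2]", xmlValue)
  })
}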
get.synch <- function(urls) {
  lapply(urls, function(x) {
    doc <- htmlParse(x)
    res2 <- xpathSApply(doc, "/html/body/h2[2]", xmlValue)
    res2
  })
}
Here is a benchmark with 100 URLs; the asynchronous version cuts the total parsing time roughly in half.
library(microbenchmark)

uris <- c("http://www.omegahat.org/RCurl/index.html")
urls <- replicate(100, uris)
microbenchmark(get.asynch(urls), get.synch(urls), times = 1)

Unit: seconds
             expr      min       lq   median       uq      max neval
 get.asynch(urls) 22.53783 22.53783 22.53783 22.53783 22.53783     1
  get.synch(urls) 39.50615 39.50615 39.50615 39.50615 39.50615     1
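If you would rather stay with the parallel package, the fix for the code in the question is to do both the parsing and the extraction inside the worker, so that only serializable results (character vectors rather than external pointers) come back. A minimal sketch under that assumption, reusing the XPath from above:

library(parallel)

# Sketch: return extracted text, not the parsed document itself,
# since external pointers cannot survive the trip back from workers.
myFunction <- function(x) {
  cl <- makeCluster(getOption("cl.cores", detectCores()))
  on.exit(stopCluster(cl))
  clusterEvalQ(cl, library(XML))  # load XML on each worker
  parSapply(cl = cl, X = x, FUN = function(u) {
    doc <- htmlParse(u)
    xpathSApply(doc, "/html/body/h2[2]", xmlValue)
  })
}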