Unable to save R output to HDFS

Date: 2015-08-27 10:27:53

Tags: r hadoop apache-spark sparkr

I am running a SparkR program and I want to save its output to HDFS. The output saves fine to a local path, but as soon as I point it at an HDFS path it throws an error. I launch the job from a shell script. This is my shell script:

/SparkR-pkg/lib/SparkR/sparkR-submit --master yarn-client examples/pi.R yarn-client 4 

Here is my code:

library(SparkR)

getwd()
# Attempt to make the HDFS output directory the working directory (this is the call that fails)
setwd('hdfs://ip-172-31-41-199.us-west-2.compute.internal:8020/user/karun/output/')

args <- commandArgs(trailing = TRUE)
if (length(args) < 1) {
  print("Usage: pi <master> [<slices>]")
  q("no")
}

sc <- sparkR.init(args[[1]], "PiR")
slices <- ifelse(length(args) > 1, as.integer(args[[2]]), 2)
n <- 100000 * slices

piFunc <- function(elem) {
  rands <- runif(n = 2, min = -1, max = 1)
  val <- ifelse((rands[1]^2 + rands[2]^2) < 1, 1.0, 0.0)
  val
}

# Vectorised version: counts how many random points in a partition fall inside the unit circle
piFuncVec <- function(elems) {
  message(length(elems))
  rands1 <- runif(n = length(elems), min = -1, max = 1)
  rands2 <- runif(n = length(elems), min = -1, max = 1)
  val <- ifelse((rands1^2 + rands2^2) < 1, 1.0, 0.0)
  sum(val)
}

rdd <- parallelize(sc, 1:n, slices)
count <- reduce(lapplyPartition(rdd, piFuncVec), sum)

output <- paste("Pi is roughly", 4.0 * count / n, "\n")
output <- paste(output, "Num elements in RDD ", count(rdd), "\n")

# Writing to a local file works; it is the HDFS path that I cannot get to work
writeLines(output, con = "file.txt", sep = "\n", useBytes = FALSE)
cat("Num elements in RDD ", count(rdd), "\n")

I have tried many ways to save the output to the HDFS location (sink, write.data, writetype, and so on). I am also trying to change the working directory to the HDFS path with setwd(), but that does not work either; it throws this error:

Error in setwd("hdfs://ip-172-31-41-199.us-west-2.compute.internal:8020/user/karun/output/") :
  cannot change working directory
Execution halted
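
For reference, the kind of write I am after looks roughly like the sketch below. Both variants are only guesses on my side and untested: the first assumes the SparkR-pkg RDD API exposes saveAsTextFile(), and the second simply writes the file locally and copies it into HDFS with the hadoop command-line client. The HDFS path is the same one used in the setwd() call above.

# Sketch only: both variants are assumptions, not confirmed to work
hdfs_dir <- "hdfs://ip-172-31-41-199.us-west-2.compute.internal:8020/user/karun/output"

# Variant 1: let Spark write the result (assumes SparkR-pkg provides saveAsTextFile for RDDs)
resultRDD <- parallelize(sc, list(output), 1)
saveAsTextFile(resultRDD, paste0(hdfs_dir, "/pi-result"))

# Variant 2: write locally first, then push the file into HDFS via the hadoop CLI
writeLines(output, con = "file.txt")
system(paste("hadoop fs -put -f file.txt", paste0(hdfs_dir, "/file.txt")))

If either of these is the right direction, or there is a cleaner way, I would like to know.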

I have been troubleshooting this for two days. Any help is greatly appreciated.

0 Answers:

There are no answers yet.