Cannot get a SparkR session on Windows 7 (Java timeout error)

Asked: 2016-12-01 12:58:32

Tags: r apache-spark sparkr

I have Spark installed, Java installed, and the paths set correctly. Yet it does not work, and gives me a Java timeout error.

Has anyone gotten this to work? What am I missing?
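
For reference, the environment was set along these lines before the session shown below (a minimal sketch reconstructing the setup; the paths match the Sys.getenv() output that follows):

# Point R at the Spark and Java installations
Sys.setenv(SPARK_HOME = "C:/Program Files/spark-2.0.1-bin-hadoop2.7")
Sys.setenv(JAVA_HOME = "C:\\Java\\jdk1.8.0_74")

# Make the SparkR package bundled with Spark visible to library()
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))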

Sys.getenv('SPARK_HOME')
[1] "C:/Program Files/spark-2.0.1-bin-hadoop2.7"

Sys.getenv('JAVA_HOME')
[1] "C:\\Java\\jdk1.8.0_74"

.libPaths()
[1] "C:/Program Files/spark-2.0.1-bin-hadoop2.7/R/lib"
[2] "C:/Program Files/R/R-3.3.1/library"              

library(SparkR)

Attaching package: ‘SparkR’

The following objects are masked from ‘package:stats’:

    cov, filter, lag, na.omit, predict, sd, var, window

The following objects are masked from ‘package:base’:

    as.data.frame, colnames, colnames<-, drop, endsWith, intersect, rank,
    rbind, sample, startsWith, subset, summary, transform, union

sparkR.session(master = 'local')
Spark package found in SPARK_HOME: C:/Program Files/spark-2.0.1-bin-hadoop2.7
Launching java with spark-submit command C:/Program Files/spark-2.0.1-bin-hadoop2.7/bin/spark-submit2.cmd   sparkr-shell C:\Users\BLAHUSER~1\AppData\Local\Temp\RtmpiGe4EB\backend_port11587c2522e 
Error in sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap,  : 
  JVM is not ready after 10 seconds
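
As a sanity check, the same Windows launcher that SparkR invokes above can be run directly from R, to see whether spark-submit starts at all outside of SparkR (a minimal sketch, not from the SparkR docs; if this hangs or errors too, the problem is in the Spark/Java setup rather than in SparkR itself):

# Run the spark-submit launcher that SparkR uses and capture its output
spark_submit <- file.path(Sys.getenv("SPARK_HOME"), "bin", "spark-submit2.cmd")
system2(spark_submit, args = "--version", stdout = TRUE, stderr = TRUE)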

Update/Edit:

I finally managed to get Spark and sparkR.session() working after doing the following:

1) Moved the Spark installation out of the "Program Files" folder to a top-level drive folder (apparently the space in the folder name is a problem on Windows).

2) Installed winutils.exe in C:\winutils\bin.

3) Set HADOOP_HOME to 'C:\winutils\'.

4) Added C:\winutils\bin to the PATH, and added the Spark bin folder to the PATH (a sketch of doing steps 3 and 4 from within R follows below).
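
For completeness, steps 3 and 4 can also be applied for the current R session only (a minimal sketch; C:\winutils and the Spark location match the setup above):

# Session-level equivalents of steps 3) and 4); for a permanent fix,
# set these in the Windows system environment variables instead
Sys.setenv(HADOOP_HOME = "C:\\winutils")
Sys.setenv(PATH = paste("C:\\winutils\\bin",
                        "C:\\spark-2.0.1-bin-hadoop2.7\\bin",
                        Sys.getenv("PATH"),
                        sep = ";"))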

However, now I am unable to create a DataFrame in Spark, and I get the error below. Any ideas?

> sparkR.session(master = 'local[*]')
Spark package found in SPARK_HOME: C:\spark-2.0.1-bin-hadoop2.7
Launching java with spark-submit command C:\spark-2.0.1-bin-hadoop2.7/bin/spark-submit2.cmd   sparkr-shell C:\Users\TUMULU~1\AppData\Local\Temp\RtmpiyYUM7\backend_port1b302ee5515 
Java ref type org.apache.spark.sql.SparkSession id 1 
> 
> df <- as.DataFrame(faithful)
Error in invokeJava(isStatic = TRUE, className, methodName, ...) : 
  java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:258)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:359)
    at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:263)
    at org.apache.spark.sql.hive.HiveSharedState.metadataHive$lzycompute(HiveSharedState.scala:39)
    at org.apache.spark.sql.hive.HiveSharedState.metadataHive(HiveSharedState.scala:38)
    at org.apache.spark.sql.hive.HiveSharedState.externalCatalog$lzycompute(HiveSharedState.scala:46)
    at org.apache.spark.sql.hive.HiveSharedSt
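
A possible lead, though unconfirmed for this case: the stack trace points at Hive client initialization, and on Windows Spark's Hive support reportedly needs a writable \tmp\hive directory. A sketch using the winutils installed above:

# Unverified guess based on the Hive-related stack trace above:
# create \tmp\hive and make it writable via winutils
dir.create("C:/tmp/hive", recursive = TRUE, showWarnings = FALSE)
system2("C:/winutils/bin/winutils.exe", args = c("chmod", "777", "C:\\tmp\\hive"))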

0 Answers:

There are no answers yet.