Configuring snappy support for Spark on Windows

Time: 2017-12-04 03:17:02

Tags: java hadoop apache-spark snappy

I am loading snappy-compressed JSON files into a Spark RDD or Dataset, but I run into this error: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
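For reference, this is roughly how I read the file; the class name and the input path here are just placeholders for illustration, and reading the snappy-compressed JSON is what triggers the error:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SnappyJsonLoad {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("normal spark")
                .master("local")
                .getOrCreate();

        // Placeholder input path; the UnsatisfiedLinkError is thrown
        // while Spark tries to decompress this file with Snappy.
        Dataset<Row> ds = spark.read().json("D:\\data\\input.json.snappy");
        ds.show();
    }
}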

I have set the following configuration:

import org.apache.spark.SparkConf;

SparkConf conf = new SparkConf()
            .setAppName("normal spark")
            .setMaster("local")
            // use Snappy for Spark's internal compression
            .set("spark.io.compression.codec", "org.apache.spark.io.SnappyCompressionCodec")
            // point the driver and the executors at Spark's jars directory
            .set("spark.driver.extraLibraryPath", "D:\\Downloads\\spark-2.2.0-bin-hadoop2.7\\spark-2.2.0-bin-hadoop2.7\\jars")
            .set("spark.driver.extraClassPath", "D:\\Downloads\\spark-2.2.0-bin-hadoop2.7\\spark-2.2.0-bin-hadoop2.7\\jars")
            .set("spark.executor.extraLibraryPath", "D:\\Downloads\\spark-2.2.0-bin-hadoop2.7\\spark-2.2.0-bin-hadoop2.7\\jars")
            .set("spark.executor.extraClassPath", "D:\\Downloads\\spark-2.2.0-bin-hadoop2.7\\spark-2.2.0-bin-hadoop2.7\\jars");

where D:\Downloads\spark-2.2.0-bin-hadoop2.7 is the path I extracted Spark to, and I can find the snappy jar files snappy-0.2.jar and snappy-java-1.1.2.6.jar in

D:\Downloads\spark-2.2.0-bin-hadoop2.7\spark-2.2.0-bin-hadoop2.7\jars\

However, none of this has any effect; the error message does not even change.
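For what it's worth, buildSupportsSnappy() is declared native in Hadoop's NativeCodeLoader, so my understanding is that the JVM resolves it from Hadoop's native library (hadoop.dll on Windows) via java.library.path, not from any jar. Here is a minimal check I can run to see whether that native library is being found at all (the class name is my own; it only assumes hadoop-common is on the classpath):

import org.apache.hadoop.util.NativeCodeLoader;

public class NativeCheck {
    public static void main(String[] args) {
        // false here means Hadoop's native library (hadoop.dll on Windows)
        // was not found, which is the precondition for the
        // buildSupportsSnappy() UnsatisfiedLinkError.
        System.out.println("native hadoop loaded: "
                + NativeCodeLoader.isNativeCodeLoaded());
        System.out.println("java.library.path = "
                + System.getProperty("java.library.path"));
    }
}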

How can I fix this?

0 Answers:

No answers