Loading data from HDFS - Spark Scala

Time: 2016-12-23 15:55:19

Tags: scala apache-spark hdfs

I have a self-contained SBT application and I want to load my data from HDFS. I used this command:

val loadfiles1 = sc.textFile("hdfs:///tmp/MySimpleProject/file1.dat")

But I get this error:

[error] (run-main-0) java.io.IOException: Incomplete HDFS URI, no host: hdfs:/tmp/MyProjectSpark/file1.dat
java.io.IOException: Incomplete HDFS URI, no host: hdfs:/tmp/MyProjectSpark/file1.dat
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:133)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:221)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1930)
        at org.apache.spark.rdd.RDD.count(RDD.scala:1134)
        at app$.main(App.scala:33)
        at app.main(App.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
[trace] Stack trace suppressed: run last compile:run for the full output.
16/12/23 15:19:16 ERROR ContextCleaner: Error in cleaning thread
java.lang.InterruptedException
        at java.lang.Object.wait(Native Method)
        at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
        at org.apache.spark.ContextCleaner$$anonfun$org$apache$spark$ContextCleaner$$keepCleaning$1.apply$mcV$sp(ContextCleaner.scala:175)
        at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1249)
        at org.apache.spark.ContextCleaner.org$apache$spark$ContextCleaner$$keepCleaning(ContextCleaner.scala:172)
        at org.apache.spark.ContextCleaner$$anon$1.run(ContextCleaner.scala:67)
16/12/23 15:19:16 ERROR Utils: uncaught error in thread SparkListenerBus, stopping SparkContext
java.lang.InterruptedException
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:996)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1303)
        at java.util.concurrent.Semaphore.acquire(Semaphore.java:317)
        at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(LiveListenerBus.scala:80)
        at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
        at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1$$anonfun$apply$mcV$sp$1.apply(LiveListenerBus.scala:79)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
        at org.apache.spark.scheduler.LiveListenerBus$$anon$1$$anonfun$run$1.apply$mcV$sp(LiveListenerBus.scala:78)
        at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1249)
        at org.apache.spark.scheduler.LiveListenerBus$$anon$1.run(LiveListenerBus.scala:77)
16/12/23 15:19:16 INFO SparkUI: Stopped Spark web UI at http://10.0.2.15:4040
java.lang.RuntimeException: Nonzero exit code: 1
        at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 10 s, completed Dec 23, 2016 3:19:17 PM
16/12/23 15:19:17 INFO DiskBlockManager: Shutdown hook called
16/12/23 15:19:17 INFO ShutdownHookManager: Shutdown hook called
16/12/23 15:19:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-515b242b-7450-4215-9831-8e6976cb41ba
16/12/23 15:19:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-515b242b-7450-4215-9831-8e6976cb41ba/userFiles-ee18e822-55c7-4613-b3f7-03e5a4c896e1

Why all these errors when all I want is to load a file from HDFS? The Spark context is configured as follows:

val conf = new SparkConf().setAppName("My first project hadoop spark").setMaster("local[4]")
val sc = new SparkContext(conf)

The HDFS configuration in core-site.xml is as follows:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://sandbox.hortonworks.com:8020</value>
  <final>true</final>
</property>

Thank you.

1 Answer:

Answer 0 (Score: 1)

The stack trace says it clearly:

Incomplete HDFS URI, no host: hdfs:/tmp/MyProjectSpark/file1.dat

Please specify the HDFS namenode host and, optionally, the port (the default is 8020; specify it if yours differs).

Something like this (assuming localhost is your namenode):

hdfs://localhost:8020/tmp/MyProjectSpark/file1.dat
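
Applied to the code in the question, a minimal sketch might look like the following. It is only illustrative: the namenode host and port are taken from the fs.defaultFS value in the core-site.xml shown above, and the path reuses the one from the question; adjust both to match your cluster.

import org.apache.spark.{SparkConf, SparkContext}

// Illustrative sketch, not the asker's exact App.scala.
object App {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("My first project hadoop spark")
      .setMaster("local[4]")
    val sc = new SparkContext(conf)

    // Fully qualified HDFS URI: scheme + namenode host + port + path.
    // Host and port here are assumed from the fs.defaultFS shown in the question;
    // replace them with your own namenode address if it differs.
    val loadfiles1 = sc.textFile("hdfs://sandbox.hortonworks.com:8020/tmp/MySimpleProject/file1.dat")
    println(loadfiles1.count())

    sc.stop()
  }
}

Alternatively, if the cluster's core-site.xml (the one that sets fs.defaultFS) is on the application's classpath, the short hdfs:///tmp/... form should also resolve, because Hadoop fills in the default host and port from that setting.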