Reading a file from HDFS with Scala and creating an RDD from it

Asked: 2018-04-27 15:48:22

Tags: scala apache-spark hdfs

I am trying to load some files from HDFS using Scala, but every time I try I run into the same error.

HDFS file location: hdfs/test/dir/text.txt

(I have more files in /dir.)

My code:

// Spark Packages
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

// Initializing Spark
val conf = new SparkConf().setAppName("training").setMaster("master")
new SparkContext(conf)

// Read files from HDFS and convert to RDD.
val rdd = sc.textFile("/test/dir/*")
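
(As an aside: a path without a scheme, like the glob above, is resolved against the cluster's configured default filesystem, fs.defaultFS, so /test/dir/* only finds the files if that setting points at the intended HDFS. A fully-qualified URI makes the target explicit; in the one-liner below, namenode:8020 is a placeholder for the actual NameNode address, not something taken from the question:

val rdd = sc.textFile("hdfs://namenode:8020/test/dir/*")
)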

My error:

18/04/29 05:44:30 INFO storage.MemoryStore: ensureFreeSpace(280219) called with curMem=301375, maxMem=257918238
18/04/29 05:44:30 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 273.7 KB, free 245.4 MB)
18/04/29 05:44:31 INFO storage.MemoryStore: ensureFreeSpace(21204) called with curMem=581594, maxMem=257918238
18/04/29 05:44:31 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 20.7 KB, free 245.4 MB)
18/04/29 05:44:31 ERROR actor.OneForOneStrategy: 
java.lang.NullPointerException
    at org.apache.spark.storage.BlockManagerMasterActor.org$apache$spark$storage$BlockManagerMasterActor$$updateBlockInfo(BlockManagerMasterActor.scala:359)
    at org.apache.spark.storage.BlockManagerMasterActor$$anonfun$receiveWithLogging$1.applyOrElse(BlockManagerMasterActor.scala:75)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply$mcVL$sp(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:33)
    at scala.runtime.AbstractPartialFunction$mcVL$sp.apply(AbstractPartialFunction.scala:25)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:53)
    at org.apache.spark.util.ActorLogReceive$$anon$1.apply(ActorLogReceive.scala:42)
    at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:118)
    at 

...and more.

How can I fix this? Or is my syntax wrong?

Thank you very much.

1 Answer:

Answer 0 (score: 0):

I was able to run the code after removing the following:

{{1}}
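
For reference, here is a minimal sketch of a standalone version that avoids the two most likely culprits in the question's code: "master" is not a valid master URL, and the SparkContext was never bound to the sc that the next line uses. The local[*] master and the object/main wrapper are assumptions for local testing, not part of the original question; on a real cluster the master URL would normally be supplied via spark-submit instead:

// Spark packages
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

object ReadFromHdfs {
  def main(args: Array[String]): Unit = {
    // Assumption: run locally on all cores. On a cluster, omit
    // setMaster here and pass the real master URL to spark-submit.
    val conf = new SparkConf().setAppName("training").setMaster("local[*]")

    // Bind the context to a value; the original code created it
    // without assigning it to the `sc` used on the next line.
    val sc = new SparkContext(conf)

    // Read every file under /test/dir from HDFS into an RDD of lines.
    val rdd = sc.textFile("/test/dir/*")
    println(s"Read ${rdd.count()} lines")

    sc.stop()
  }
}

If the code is instead pasted into spark-shell, the SparkConf/SparkContext lines should be dropped entirely in favor of the shell's built-in sc, since creating a second context alongside the shell's can cause failures like the one in the trace above; that may well be the fragment the answer refers to removing.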