Spark writing to HDFS not working with the saveAsNewAPIHadoopFile method

Asked: 2014-11-22 01:04:22

Tags: hadoop hdfs apache-spark cloudera

I'm using Spark 1.1.0 on CDH 5.2.0 and trying to make sure I can read from and write to HDFS.

I quickly realized that .textFile and .saveAsTextFile call the old API, which does not seem to be compatible with our HDFS version.

  def testHDFSReadOld(sc: SparkContext, readFile: String){
    //THIS WILL FAIL WITH
    //(TID 0, dl1rhd416.internal.edmunds.com): java.lang.IllegalStateException: unread block data
    //java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2420)

    sc.textFile(readFile).take(2).foreach(println)
  }

  def testHDFSWriteOld(sc: SparkContext, writeFile: String){
    //THIS WILL FAIL WITH
    //(TID 0, dl1rhd416.internal.edmunds.com): java.lang.IllegalStateException: unread block data
    //java.io.ObjectInputStream$BlockDataInputStream.setBlockDataMode(ObjectInputStream.java:2420)

    sc.parallelize(List("THIS","ISCOOL")).saveAsTextFile(writeFile)
  }

Switching to the new API methods fixed reading from HDFS!

  def testHDFSReadNew(sc: SparkContext, readFile: String){
    //THIS WORKS
    sc.newAPIHadoopFile(readFile, classOf[TextInputFormat], classOf[LongWritable],
      classOf[Text], sc.hadoopConfiguration).map {
      case (x: LongWritable, y: Text) => y.toString
    }.take(2).foreach(println)
  }

So I seem to be making progress. Writing no longer bails out with a hard error like the above; instead it appears to work. The only problem is that, apart from a lone SUCCESS flag file, there is nothing in the directory. Even more puzzling, the logs show data being written to the _temporary directory. It looks as if the output committer never realizes it is supposed to move the files from _temporary to the output directory.

  def testHDFSWriteNew(sc: SparkContext, writeFile: String){
    /*This will have an error message of:
    INFO ConnectionManager: Removing SendingConnection to ConnectionManagerId(dl1rhd400.internal.edmunds.com,35927)
    14/11/21 02:02:27 INFO ConnectionManager: Key not valid ? sun.nio.ch.SelectionKeyImpl@2281f1b2
      14/11/21 02:02:27 INFO ConnectionManager: key already cancelled ? sun.nio.ch.SelectionKeyImpl@2281f1b2
      java.nio.channels.CancelledKeyException
    at org.apache.spark.network.ConnectionManager.run(ConnectionManager.scala:386)
    at org.apache.spark.network.ConnectionManager$$anon$4.run(ConnectionManager.scala:139)

    However lately it hasn't even had errors, symptoms are no part files in the directory but a success flag is there
    */
    val conf = sc.hadoopConfiguration
    conf.set("mapreduce.task.files.preserve.failedtasks", "true")
    conf.set("mapred.output.dir", writeFile)
    sc.parallelize(List("THIS","ISCOOL")).map(x => (NullWritable.get, new Text(x)))
      .saveAsNewAPIHadoopFile(writeFile, classOf[NullWritable], classOf[Text], classOf[TextOutputFormat[NullWritable, Text]], conf)

  }
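
To check whether anything ever makes it out of _temporary, a small diagnostic sketch (the helper name is illustrative; it relies only on the standard Hadoop FileSystem API) can recursively list whatever actually landed under the output directory after the job finishes.

  import org.apache.hadoop.conf.Configuration
  import org.apache.hadoop.fs.Path

  def dumpOutputDir(conf: Configuration, writeFile: String): Unit = {
    val outPath = new Path(writeFile)
    val fs = outPath.getFileSystem(conf)
    if (fs.exists(outPath)) {
      // Recursive file listing: a committed job should show part-* files,
      // an uncommitted one only the SUCCESS flag and/or files under _temporary.
      val files = fs.listFiles(outPath, true)
      while (files.hasNext) {
        println(files.next().getPath)
      }
    } else {
      println(writeFile + " does not exist")
    }
  }

Calling it with sc.hadoopConfiguration and the same writeFile right after the save makes it obvious whether the committer ever promoted the part files.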

When I run locally and specify an HDFS path, the files show up in HDFS just fine. This only happens when I run on the Spark standalone cluster.

I submit the job as follows: spark-submit --deploy-mode client --master spark://sparkmaster --class driverclass driverjar
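
One variable worth ruling out when a local run writes fine but a standalone-cluster run does not (it is not confirmed as the cause here) is whether the output path is a fully qualified HDFS URI, so the driver and every executor resolve the same filesystem. The host, port and path below are placeholders:

  import org.apache.hadoop.io.{NullWritable, Text}
  import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
  import org.apache.spark.SparkContext._

  // Placeholder host/port/path; substitute the cluster's actual namenode.
  val qualifiedWriteFile = "hdfs://namenode-host:8020/tmp/spark-write-test"
  sc.parallelize(List("THIS", "ISCOOL"))
    .map(x => (NullWritable.get, new Text(x)))
    .saveAsNewAPIHadoopFile(qualifiedWriteFile, classOf[NullWritable], classOf[Text],
      classOf[TextOutputFormat[NullWritable, Text]], sc.hadoopConfiguration)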

1 Answer:

Answer 0 (score: 0)

Could you try the following code?

import org.apache.hadoop.io._
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
val nums = sc.makeRDD(1 to 3).map(x => (new IntWritable(x), new Text("a" * x)))
nums.saveAsNewAPIHadoopFile[TextOutputFormat[IntWritable, Text]]("/data/newAPIHadoopFile")

The following code also works for me.

val x = sc.parallelize(List("THIS","ISCOOL")).map(x => (NullWritable.get, new Text(x)))
x.saveAsNewAPIHadoopFile("/data/nullwritable", classOf[NullWritable], classOf[Text], classOf[TextOutputFormat[NullWritable, Text]], sc.hadoopConfiguration)

[root@sparkmaster ~]# hadoop fs -cat /data/nullwritable/*

15/08/20 02:09:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable