Unable to use the Spark RDD API

Date: 2017-04-07 06:50:20

Tags: apache-spark

I am using the following code to write an RDD out as a sequence file:

  import org.apache.hadoop.io.{IntWritable, Text}
  import org.apache.spark.{SparkConf, SparkContext}

  @Test
  def testSparkWordCount(): Unit = {
    val words = Array("Hello", "Hello", "World", "Hello", "Welcome", "World")
    val conf = new SparkConf().setMaster("local").setAppName("testSparkWordCount")
    val sc = new SparkContext(conf)

    // Write the (word, 1) pairs out as a Hadoop sequence file.
    val dir = "file:///" + System.currentTimeMillis()
    sc.parallelize(words).map(x => (x, 1)).saveAsHadoopFile(
      dir,
      classOf[Text],
      classOf[IntWritable],
      classOf[org.apache.hadoop.mapred.SequenceFileOutputFormat[Text, IntWritable]]
    )

    sc.stop()
  }

When I run it, it fails with:

Caused by: java.io.IOException: wrong key class: java.lang.String is not class org.apache.hadoop.io.Text
    at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1373)
    at org.apache.hadoop.mapred.SequenceFileOutputFormat$1.write(SequenceFileOutputFormat.java:76)
    at org.apache.spark.internal.io.SparkHadoopWriter.write(SparkHadoopWriter.scala:94)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply$mcV$sp(PairRDDFunctions.scala:1139)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1137)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$12$$anonfun$apply$4.apply(PairRDDFunctions.scala:1137)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1360)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1145)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1125)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)

Do I have to use sc.parallelize(words).map(x => (new Text(x), new IntWritable(1))) instead of sc.parallelize(words).map(x => (x, 1))? I don't think I should have to wrap the values explicitly, since SparkContext already provides implicits that wrap primitive types into their corresponding Writables.
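For reference, explicitly wrapping everything does make the save succeed; a minimal sketch of that workaround (reusing sc, words and dir from the test above) is:

    // Explicit wrapping: build the Writable pairs by hand before saving.
    sc.parallelize(words)
      .map(x => (new Text(x), new IntWritable(1)))
      .saveAsHadoopFile(
        dir,
        classOf[Text],
        classOf[IntWritable],
        classOf[org.apache.hadoop.mapred.SequenceFileOutputFormat[Text, IntWritable]]
      )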

So, what should I do to make this code work?

1 Answer:

Answer 0 (score: 1)

Yes, SparkContext does provide implicits for the conversion. But that conversion is not applied during saving; it has to be applied in the usual Scala way:

import org.apache.spark.SparkContext._
val mapperFunction: String => (Text, IntWritable) = x => (x, 1)
... parallelize(words).map(mapperFunction).saveAsHadoopFile ...
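Filling the elided pieces back in from the question, a minimal end-to-end sketch of this approach might look like the following. It assumes a Spark version in which the String -> Text and Int -> IntWritable implicit conversions are still exposed via SparkContext._ (later releases deprecated them, so if your version no longer provides them, fall back to wrapping explicitly with new Text(...) / new IntWritable(1)):

    import org.apache.hadoop.io.{IntWritable, Text}
    import org.apache.hadoop.mapred.SequenceFileOutputFormat
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.SparkContext._ // String -> Text / Int -> IntWritable implicits, if provided

    object WordCountSequenceFile {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setMaster("local").setAppName("testSparkWordCount")
        val sc = new SparkContext(conf)

        val words = Array("Hello", "Hello", "World", "Hello", "Welcome", "World")
        val dir = "file:///" + System.currentTimeMillis()

        // The explicit (Text, IntWritable) result type makes the implicit
        // conversions fire while the pairs are built, so the RDD already
        // holds Writable keys and values by the time it is saved.
        val mapperFunction: String => (Text, IntWritable) = x => (x, 1)

        sc.parallelize(words)
          .map(mapperFunction)
          .saveAsHadoopFile(dir, classOf[Text], classOf[IntWritable],
            classOf[SequenceFileOutputFormat[Text, IntWritable]])

        sc.stop()
      }
    }

The point of the explicit function type is that the conversion must happen while the pairs are created; naming Text and IntWritable only in the classOf arguments of saveAsHadoopFile leaves the RDD as RDD[(String, Int)], which is why the writer rejects java.lang.String as the key class.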