Spark - Scala - saveAsHadoopFile throws an error

Time: 2014-09-23 13:47:17

Tags: scala apache-spark

I'm trying to solve this problem but can't make any progress. Can anyone please help?

import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat

class KeyBasedOutput[T >: Null, V <: AnyRef] extends MultipleTextOutputFormat[T, V] {
  // Use the key as the output file name under the output directory
  override def generateFileNameForKeyValue(key: T, value: V, leaf: String) = {
    key.toString
  }
  // Return null so the key itself is not written into the output records
  override def generateActualKey(key: T, value: V) = {
    null
  }
}

val cp1 = sqlContext.sql("select * from d_prev_fact")
  .map(t => t.mkString("\t"))
  .map { x =>
    val parts = x.split("\t")
    val partition_key = parts(3)
    val rows = parts.slice(0, parts.length).mkString("\t")
    ("date=" + partition_key, rows)
  }

cp1.saveAsHadoopFile(FACT_CP)

I run into the following error and can't debug it:

scala> cp1.saveAsHadoopFile(FACT_CP,classOf[String],classOf[String],classOf[KeyBasedOutput[String, String]])
java.lang.RuntimeException: java.lang.NoSuchMethodException: $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$KeyBasedOutput.<init>()
    at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
    at org.apache.hadoop.mapred.JobConf.getOutputFormat(JobConf.java:709)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:742)
    at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:674)

The idea is to write the values into multiple folders based on the key.
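
For illustration, with a made-up row whose fourth column is the date, the map above produces:

val x = "a1\tb2\tc3\t2014-09-23"   // hypothetical tab-separated input row
val parts = x.split("\t")
val pair = ("date=" + parts(3), parts.mkString("\t"))
// pair == ("date=2014-09-23", "a1\tb2\tc3\t2014-09-23")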

2 Answers:

Answer 0 (score: 1)

Put KeyBasedOutput into a jar and launch spark-shell --jars /path/to/the/jar. A class defined directly in the spark-shell is compiled as a class nested inside the REPL's wrapper objects (the $iwC$$iwC... prefix in your stack trace), so Hadoop's ReflectionUtils.newInstance cannot find a zero-argument constructor for it. Compiling the class into a jar gives it a normal top-level name with a public no-arg constructor.
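
A minimal sketch of the jar-side source file (the file and package names here are hypothetical; the class body is the one from the question):

// KeyBasedOutput.scala -- compile into a jar with scalac or sbt,
// against the same Spark/Hadoop versions the shell runs
package com.example.output  // hypothetical package

import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat

class KeyBasedOutput[T >: Null, V <: AnyRef] extends MultipleTextOutputFormat[T, V] {
  override def generateFileNameForKeyValue(key: T, value: V, leaf: String) =
    key.toString  // for one folder per key with distinct part files, key.toString + "/" + leaf also works
  override def generateActualKey(key: T, value: V) =
    null
}

Then start the shell with the jar on the classpath:

spark-shell --jars /path/to/the/jar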

Answer 1 (score: 0)

I'm not sure, but I think type erasure combined with reflection may be causing this. Try defining a non-generic subclass of KeyBasedOutput that hardcodes the type parameters, and use that instead:

class StringKeyBasedOutput extends KeyBasedOutput[String, String]
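
With that subclass, the save call from the question becomes (a sketch reusing cp1 and FACT_CP from above):

// Hardcoded String/String type arguments mean no erased generics are
// involved when Hadoop instantiates the format from its Class object.
cp1.saveAsHadoopFile(FACT_CP, classOf[String], classOf[String], classOf[StringKeyBasedOutput])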