I am running a modified version of the teragen program in Spark, written in Scala. I am trying to save the output file using the saveAsNewAPIHadoopFile() function. The relevant code is:
dataset.map(row => (NullWritable.get(), new BytesWritable(row))).saveAsNewAPIHadoopFile(output)
The code compiles successfully. However, when I run it, I get the following error:
Exception in thread "main" java.lang.RuntimeException: class scala.runtime.Nothing$ not org.apache.hadoop.mapreduce.OutputFormat
at org.apache.hadoop.conf.Configuration.setClass(Configuration.java:1794)
at org.apache.hadoop.mapreduce.Job.setOutputFormatClass(Job.java:823)
at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:830)
at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:811)
at GenSort$.main(GenSort.scala:52)
at GenSort.main(GenSort.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Is there a way to make this work with saveAsNewAPIHadoopFile()? I would be glad for any help.
Answer 0 (score: 1)
The method signature of saveAsNewAPIHadoopFile is:

saveAsNewAPIHadoopFile(path: String,
    keyClass: Class[_],
    valueClass: Class[_],
    outputFormatClass: Class[_ <: org.apache.hadoop.mapreduce.OutputFormat[_, _]],
    conf: Configuration = self.context.hadoopConfiguration)

It expects the key class, the value class, and the output-format class in addition to the output path. Your call, which passes only a path, resolves to the generic one-argument overload saveAsNewAPIHadoopFile[F <: OutputFormat[K, V]](path: String), and since no type argument is supplied, the compiler infers F = Nothing. That is exactly what the runtime error about scala.runtime.Nothing$ is complaining about.
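One fix is therefore to keep the one-argument call and supply the output format as an explicit type parameter. A minimal sketch, assuming dataset and output are the RDD[Array[Byte]] and path string from the question:

import org.apache.hadoop.io.{BytesWritable, NullWritable}
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat

// Passing the format as a type argument prevents the compiler from
// inferring Nothing for the overload's F parameter.
dataset
  .map(row => (NullWritable.get(), new BytesWritable(row)))
  .saveAsNewAPIHadoopFile[TextOutputFormat[NullWritable, BytesWritable]](output)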
Using the explicit overload, the call should be:

dataset.map(row => (NullWritable.get(), new BytesWritable(row)))
  .saveAsNewAPIHadoopFile("hdfs://.....", classOf[NullWritable], classOf[BytesWritable],
    classOf[org.apache.hadoop.mapreduce.lib.output.TextOutputFormat[NullWritable, BytesWritable]])
or, equivalently, using getClass instead of classOf (note that NullWritable has a private constructor, so its class object must be obtained through NullWritable.get()):

dataset.map(row => (NullWritable.get(), new BytesWritable(row)))
  .saveAsNewAPIHadoopFile("hdfs://.....",
    NullWritable.get().getClass, new BytesWritable().getClass,
    new org.apache.hadoop.mapreduce.lib.output.TextOutputFormat[NullWritable, BytesWritable]().getClass)
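For completeness, here is a self-contained sketch of the whole job. The object name, app name, and the synthetic 10-byte rows are hypothetical stand-ins for the asker's teragen logic, and Spark 1.3+ is assumed so the PairRDDFunctions implicits are in scope without extra imports:

import org.apache.hadoop.io.{BytesWritable, NullWritable}
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
import org.apache.spark.{SparkConf, SparkContext}

object GenSortExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("gensort-example"))

    // Hypothetical stand-in for the generated data: 100 rows of 10 bytes each.
    val dataset = sc.parallelize(1 to 100).map(i => Array.fill(10)(i.toByte))

    dataset
      .map(row => (NullWritable.get(), new BytesWritable(row)))
      .saveAsNewAPIHadoopFile(
        args(0), // output path, e.g. an hdfs:// URI
        classOf[NullWritable],
        classOf[BytesWritable],
        classOf[TextOutputFormat[NullWritable, BytesWritable]])

    sc.stop()
  }
}

Submit it with spark-submit and pass the output directory as the first argument; as with any Hadoop output format, the directory must not already exist.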