InvalidJobConfException: Output directory not set

Date: 2019-04-29 06:31:15

Tags: scala apache-spark dataframe rdd google-cloud-bigtable

I am trying to write some data to Bigtable using a SparkSession:
import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.sql.{DataFrame, Row, SparkSession}

// conf and columnFamily are defined elsewhere in my code
val spark = SparkSession
  .builder
  .config(conf)
  .appName("my-job")
  .getOrCreate()

val hadoopConf = spark.sparkContext.hadoopConfiguration

import spark.implicits._
case class BestSellerRecord(skuNbr: String, slsQty: String, slsDollar: String, dmaNbr: String, productId: String)

val seq: DataFrame = Seq(("foo", "1", "foo1"), ("bar", "2", "bar1")).toDF("key", "value1", "value2")

val bigtablePuts = seq.toDF.rdd.map((row: Row) => {
  val put = new Put(Bytes.toBytes(row.getString(0)))
  put.addColumn(Bytes.toBytes(columnFamily), Bytes.toBytes("nbr"), Bytes.toBytes(row.getString(0)))
  (new ImmutableBytesWritable(), put)
})

bigtablePuts.saveAsNewAPIHadoopDataset(hadoopConf)

But this gives me the following exception:

Exception in thread "main" org.apache.hadoop.mapred.InvalidJobConfException: Output directory not set.
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:138)
at org.apache.spark.internal.io.HadoopMapReduceWriteConfigUtil.assertConf(SparkHadoopWriter.scala:391)
at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:71)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1083)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1081)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1081)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1081)

which is thrown from this line:

bigtablePuts.saveAsNewAPIHadoopDataset(hadoopConf)

I have also tried setting different configurations with hadoopConf.set, such as conf.set("spark.hadoop.validateOutputSpecs", "false"), but that gives me a NullPointerException instead.

How can I fix this?

1 Answer:

Answer 0 (score: 1)

Since the mapred API is deprecated, could you try upgrading to the mapreduce API? Judging by the stack trace, no table output format is configured on the Hadoop configuration you pass in, so the job falls back to FileOutputFormat.checkOutputSpecs, which insists on a file output directory.

This question shows an example of rewriting this kind of snippet: Output directory not set exception when save RDD to hbase with spark. A sketch along the same lines follows below.
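
A minimal sketch of that approach, configuring the new mapreduce API's TableOutputFormat through a Job instance. It assumes the HBase-compatible Bigtable connection settings are already present on hadoopConf, and the table name "my-table" and column family "cf" are placeholders:

import org.apache.hadoop.hbase.client.Put
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableOutputFormat
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.mapreduce.Job
import org.apache.spark.sql.Row

// Name the destination table; "my-table" is a placeholder.
hadoopConf.set(TableOutputFormat.OUTPUT_TABLE, "my-table")

// Register TableOutputFormat on a Job so the writer no longer
// falls back to a file-based output format.
val job = Job.getInstance(hadoopConf)
job.setOutputFormatClass(classOf[TableOutputFormat[ImmutableBytesWritable]])
job.setOutputKeyClass(classOf[ImmutableBytesWritable])
job.setOutputValueClass(classOf[Put])

val bigtablePuts = seq.rdd.map { row: Row =>
  val put = new Put(Bytes.toBytes(row.getString(0)))
  put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("nbr"), Bytes.toBytes(row.getString(1)))
  (new ImmutableBytesWritable(), put)
}

// Pass the Job's configuration, which now carries the output format settings.
bigtablePuts.saveAsNewAPIHadoopDataset(job.getConfiguration)

Passing job.getConfiguration instead of the bare hadoopConf is the key change: output validation is then handled by TableOutputFormat, which looks for a table name rather than an output directory.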

Hope this helps.