Trying to write data to a Kafka topic with Spark Structured Streaming, and getting the following error.
aggregatedDataset
.select(to_json(struct("*")).as("value"))
.writeStream()
.outputMode(OutputMode.Append())
.option("kafka.bootstrap.servers", kafkaBootstrapServersString)
.option("topic", topic)
.option("checkpointLocation", checkpointLocation)
.start();
Stacktrace:
Exception in thread "main" java.lang.IllegalArgumentException: 'path' is not specified
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$11.apply(DataSource.scala:276)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$11.apply(DataSource.scala:276)
at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
at org.apache.spark.sql.catalyst.util.CaseInsensitiveMap.getOrElse(CaseInsensitiveMap.scala:28)
at org.apache.spark.sql.execution.datasources.DataSource.createSink(DataSource.scala:275)
at org.apache.spark.sql.streaming.DataStreamWriter.start(DataStreamWriter.scala:286)
Answer 0 (score: 2)
The format is missing from the writeStream part; in your case it should be kafka. Without a format, Spark falls back to the default file-based sink, which requires a 'path' option, which is why the exception complains that 'path' is not specified:
aggregatedDataset
...
.writeStream
.format("kafka")
...
Hope this helps!
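For completeness, a sketch of the question's snippet with the sink format added (variable names and imports are assumed to be the same as in the question; this needs a running Spark application to execute):

```java
// Sketch: same pipeline as in the question, with the missing format set.
// Assumes aggregatedDataset, kafkaBootstrapServersString, topic and
// checkpointLocation are defined as in the question.
aggregatedDataset
    .select(to_json(struct("*")).as("value"))
    .writeStream()
    .format("kafka")  // the missing piece: selects the Kafka sink
    .outputMode(OutputMode.Append())
    .option("kafka.bootstrap.servers", kafkaBootstrapServersString)
    .option("topic", topic)
    .option("checkpointLocation", checkpointLocation)
    .start();
```

The Kafka sink expects the data to contain a "value" column (and optionally "key" and "topic" columns), which is why the question's select(to_json(struct("*")).as("value")) step is kept as-is.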