org.apache.spark.sql.AnalysisException: 'write' can not be called on streaming Dataset/DataFrame

Date: 2018-10-31 14:59:53

Tags: apache-spark spark-streaming spark-structured-streaming

I am trying to write a Spark Structured Streaming (2.3) Dataset to ScyllaDB (Cassandra).

My code for writing the Dataset:

  def saveStreamSinkProvider(ds: Dataset[InvoiceItemKafka]) = {
    ds
      .writeStream
      .format("cassandra.ScyllaSinkProvider")
      .outputMode(OutputMode.Append)
      .queryName("KafkaToCassandraStreamSinkProvider")
      .options(
        Map(
          "keyspace" -> namespace,
          "table" -> StreamProviderTableSink,
          "checkpointLocation" -> "/tmp/checkpoints"
        )
      )
      .start()
  }
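
For reference, a minimal, hypothetical call site for this method (invoiceItems stands in for the Kafka-sourced Dataset[InvoiceItemKafka], which is not shown here):

  // hypothetical usage: start the streaming query and block until it stops
  val query = saveStreamSinkProvider(invoiceItems)
  query.awaitTermination()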

My ScyllaDB streaming sink:

class ScyllaSinkProvider extends StreamSinkProvider {
  override def createSink(sqlContext: SQLContext,
                          parameters: Map[String, String],
                          partitionColumns: Seq[String],
                          outputMode: OutputMode): ScyllaSink =
    new ScyllaSink(parameters)
}

class ScyllaSink(parameters: Map[String, String]) extends Sink {
  override def addBatch(batchId: Long, data: DataFrame): Unit =
    data.write
      .cassandraFormat(
        parameters("table"),
        parameters("keyspace")
        //parameters("cluster")
      )
      .mode(SaveMode.Append)
      .save()
}

However, when I run this code, I get an exception:

...
[error]       +- StreamingExecutionRelation KafkaSource[Subscribe[transactions_load]], [key#7, value#8, topic#9, partition#10, offset#11L, timestamp#12, timestampType#13]
[error]     at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:295)
[error]     at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
[error] Caused by: org.apache.spark.sql.AnalysisException: 'write' can not be called on streaming Dataset/DataFrame;
[error]     at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
[error]     at org.apache.spark.sql.Dataset.write(Dataset.scala:3103)
[error]     at cassandra.ScyllaSink.addBatch(CassandraDriver.scala:113)
[error]     at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$3$$anonfun$apply$16.apply(MicroBatchExecution.scala:477)
...

I have also seen a similar question, but that one was for the CosmosDB-Spark connector: CosmosDB Sink: org.apache.spark.sql.AnalysisException: 'write' can not be called on streaming Dataset/DataFrame

1 Answer:

Answer 0 (score: 1):

You can convert it to an RDD first and then write it. The DataFrame handed to addBatch is still flagged as streaming, which is why Dataset.write fails the analysis check; going through the micro-batch's already-planned queryExecution avoids that check:

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{DataFrame, Row}
import org.apache.spark.sql.catalyst.CatalystTypeConverters
import org.apache.spark.sql.execution.streaming.Sink

class ScyllaSink(parameters: Map[String, String]) extends Sink {

  override def addBatch(batchId: Long, data: DataFrame): Unit = synchronized {
    val schema = data.schema
    // Reuse the micro-batch's existing query plan instead of calling
    // data.write, which would fail the streaming analysis check.
    val rdd: RDD[Row] = data.queryExecution.toRdd.mapPartitions { rows =>
      val converter = CatalystTypeConverters.createToScalaConverter(schema)
      rows.map(converter(_).asInstanceOf[Row])
    }

    // write the RDD to Cassandra (see the sketch below)
  }
}
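
The last step can be filled in with the DataStax Spark Cassandra Connector's RDD API (ScyllaDB speaks the Cassandra protocol, so the same connector applies). A minimal sketch, assuming spark-cassandra-connector 2.x is on the classpath; SqlRowWriter is the connector's RowWriterFactory for org.apache.spark.sql.Row:

import com.datastax.spark.connector._   // adds saveToCassandra to RDDs
import com.datastax.spark.connector.writer.{RowWriterFactory, SqlRowWriter}
import org.apache.spark.sql.Row

// tell the connector how to map generic Row fields onto table columns
implicit val rowWriter: RowWriterFactory[Row] = SqlRowWriter.Factory

rdd.saveToCassandra(parameters("keyspace"), parameters("table"))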