I have the following code, and I want to use Spark 2.4 Structured Streaming's foreachBatch to write it to Cassandra:
Dataset<Row> df = spark
    .readStream()
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "topic1")
    .load();

// Kafka's value column is binary, so cast it to string before splitting
Dataset<Row> values = df.selectExpr(
    "split(cast(value as string), ',')[0] as field1",
    "split(cast(value as string), ',')[1] as field2",
    "split(cast(value as string), ',')[2] as field3",
    "split(cast(value as string), ',')[3] as field4",
    "split(cast(value as string), ',')[4] as field5");
//TODO write into cassandra
values.writeStream().foreachBatch(
    new VoidFunction2<Dataset<Row>, Long>() {
        @Override
        public void call(Dataset<Row> dataset, Long batchId) throws Exception {
            // Transform and write batchDF
        }
    }
).start();
Answer 0 (score: 0)
When you use .foreachBatch, your code works just as it would with a regular (batch) Dataset. In Java the code could look like the following (full source code is here):
.foreachBatch((VoidFunction2<Dataset<Row>, Long>) (df, batchId) ->
    df.write()
        .format("org.apache.spark.sql.cassandra")
        .options(ImmutableMap.of("table", "sttest", "keyspace", "test"))
        .mode(SaveMode.Append)
        .save()
)
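For completeness, here is a minimal end-to-end sketch of that approach. The sttest table and test keyspace follow the snippet above; the class name, app name, Cassandra host, and checkpoint path are all assumptions you would replace with your own values:

import com.google.common.collect.ImmutableMap;
import org.apache.spark.api.java.function.VoidFunction2;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class KafkaToCassandra {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
            .appName("kafka-to-cassandra")                           // assumed app name
            .config("spark.cassandra.connection.host", "localhost")  // assumed Cassandra host
            .getOrCreate();

        Dataset<Row> df = spark.readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "localhost:9092")
            .option("subscribe", "topic1")
            .load();

        // Kafka's value column is binary, so cast it to string before splitting
        Dataset<Row> values = df.selectExpr(
            "split(cast(value as string), ',')[0] as field1",
            "split(cast(value as string), ',')[1] as field2",
            "split(cast(value as string), ',')[2] as field3",
            "split(cast(value as string), ',')[3] as field4",
            "split(cast(value as string), ',')[4] as field5");

        // foreachBatch hands each micro-batch to the regular batch Cassandra writer
        StreamingQuery query = values.writeStream()
            .foreachBatch((VoidFunction2<Dataset<Row>, Long>) (batchDf, batchId) ->
                batchDf.write()
                    .format("org.apache.spark.sql.cassandra")
                    .options(ImmutableMap.of("table", "sttest", "keyspace", "test"))
                    .mode(SaveMode.Append)
                    .save()
            )
            .option("checkpointLocation", "/tmp/checkpoint")  // assumed path
            .start();

        query.awaitTermination();
    }
}

The checkpoint location is what lets the query recover its Kafka offsets after a restart.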
Answer 1 (score: -1)
Try adding this to your pom.xml:
<dependency>
    <groupId>com.datastax.spark</groupId>
    <artifactId>spark-cassandra-connector_2.11</artifactId>
    <version>2.4.2</version>
</dependency>
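The dependency only brings in the data source; the connector also reads its Cassandra contact point from the spark.cassandra.connection.host Spark property. A minimal sketch of setting it when building the session (the host value is an assumption):

import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
    .config("spark.cassandra.connection.host", "127.0.0.1")  // assumed Cassandra contact point
    .getOrCreate();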
Then, after importing the Cassandra implicits (note that this answer's snippets are Scala):
import org.apache.spark.sql.cassandra._
you can use the cassandraFormat method on the DataFrame:
dataset
  .write
  .cassandraFormat("table", "keyspace")
  .save()
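Note that cassandraFormat comes from the Scala implicits imported above, so it is not directly available from Java. Since the question is in Java, a rough equivalent under the same dependency spells out the format and options explicitly (the table and keyspace names are placeholders, as above):

import org.apache.spark.sql.SaveMode;

dataset.write()
    .format("org.apache.spark.sql.cassandra")
    .option("table", "table")        // placeholder table name
    .option("keyspace", "keyspace")  // placeholder keyspace name
    .mode(SaveMode.Append)
    .save();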