My Spark Structured Streaming code throws the following exception:
18/12/05 15:00:38 ERROR StreamExecution: Query [id = 48ec92a0-811a-4d57-a65d-c0b9c754e093, runId = 5e2adff4-855e-46c6-8592-05e3557544c6] terminated with error
java.lang.ClassCastException: org.apache.spark.sql.execution.streaming.SerializedOffset cannot be cast to org.apache.spark.sql.execution.streaming.LongOffset
	at org.apache.bahir.sql.streaming.mqtt.MQTTTextStreamSource.getBatch(MQTTStreamSource.scala:152)
	at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$org$apache$spark$sql$execution$streaming$StreamExecution$$runBatch$2$$anonfun$apply$7.apply(StreamExecution.scala:614)
The exception occurs every time the query is restarted. It only works when I delete the checkpoint directory first and then start it.
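For context on why deleting the checkpoint helps: on recovery, Spark reads offsets back from the checkpoint log as a generic SerializedOffset (a JSON wrapper), so a source that blindly casts to its own offset type fails exactly as above. The sketch below illustrates the pattern with simplified stand-in classes (LongOffset, SerializedOffset and toLongOffset here are local definitions modeled loosely on Spark's internals, not the real Spark API):

```scala
// Stand-in types modeled loosely on Spark's streaming offsets
// (simplified local definitions for illustration only).
sealed trait Offset { def json: String }
case class LongOffset(offset: Long) extends Offset { def json: String = offset.toString }
case class SerializedOffset(json: String) extends Offset

// A direct cast fails after checkpoint recovery, because the offset
// restored from the checkpoint log is a SerializedOffset:
//   val end = offset.asInstanceOf[LongOffset]   // ClassCastException

// Defensive conversion handles both the live and the recovered case:
def toLongOffset(o: Offset): LongOffset = o match {
  case l: LongOffset       => l
  case s: SerializedOffset => LongOffset(s.json.toLong)
}

println(toLongOffset(SerializedOffset("42")))  // LongOffset(42)
println(toLongOffset(LongOffset(7)))           // LongOffset(7)
```

In other words, the cast at MQTTStreamSource.scala:152 only succeeds on a fresh start, when no offsets have been round-tripped through the checkpoint yet.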
The Spark Structured Streaming code is below; essentially I just read from an MQTT topic and write to an Elasticsearch index.
spark
  .readStream
  .format("org.apache.bahir.sql.streaming.mqtt.MQTTStreamSourceProvider")
  .option("topic", "Employee")
  .option("username", "username")
  .option("password", "password")
  .option("clientId", "employee11")
  .load("tcp://localhost:8000")
  .as[(String, Timestamp)]
  .writeStream
  .outputMode("append")
  .format("es")
  .option("es.resource", "spark/employee")
  .option("es.nodes", "localhost")
  .option("es.port", 9200)
  .start()
  .awaitTermination()
These are the dependencies used. I am on the MapR distribution.
"org.apache.spark" %% "spark-core" % "2.2.1-mapr-1803",
"org.apache.spark" %% "spark-sql" % "2.2.1-mapr-1803",
"org.apache.spark" %% "spark-streaming" % "2.2.1-mapr-1803",
"org.apache.bahir" %% "spark-sql-streaming-mqtt" % "2.2.1",
"org.apache.bahir" %% "spark-streaming-mqtt" % "2.2.1",
"org.elasticsearch" %% "elasticsearch-spark-20" % "6.3.2"
The spark-submit command:
/opt/mapr/spark/spark-2.2.1/bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class <MAIN_CLASS> \
  --jars spark-sql-streaming-mqtt_2.11-2.2.1.jar,org.eclipse.paho.client.mqttv3-1.1.0.jar,elasticsearch-spark-20_2.11-6.3.2.jar,mail-1.4.7.jar \
  myjar_2.11-0.1.jar
Any help on this would be appreciated.