I ran into a problem while writing a stream from Spark to a Kafka topic.
import org.apache.spark.sql.types._

val mySchema = StructType(Array(
  StructField("ID", IntegerType),
  StructField("ACCOUNT_NUMBER", StringType)
))

val streamingDataFrame = spark.readStream
  .schema(mySchema)
  .option("delimiter", ",")
  .csv("file:///opt/files")

streamingDataFrame
  .selectExpr("CAST(id AS STRING) AS key", "to_json(struct(*)) AS value")
  .writeStream
  .format("kafka")
  .option("topic", "testing")
  .option("kafka.bootstrap.servers", "10.55.55.55:9092")
  .option("checkpointLocation", "file:///opt/")
  .start()
  .awaitTermination()
Error:
2018-09-12 11:09:04,344 ERROR executor.Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.kafka.common.errors.TimeoutException: Expiring 38 record(s) for testing-0: 30016 ms has passed since batch creation plus linger time
2018-09-12 11:09:04,358 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.kafka.common.errors.TimeoutException: Expiring 38 record(s) for testing-0: 30016 ms has passed since batch creation plus linger time
2018-09-12 11:09:04,359 ERROR scheduler.TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
2018-09-12 11:09:04,370 ERROR streaming.StreamExecution: Query [id = 866e4416-138a-42b6-82fd-04b6ee1aa638, runId = 4dd10740-29dd-4275-97e2-a43104d71cf5] terminated with error
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.kafka.common.errors.TimeoutException: Expiring 38 record(s) for testing-0: 30016 ms has passed since batch creation plus linger time
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1499)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1487)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1486)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
My sbt details:
libraryDependencies += "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.2.0"
libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.10.0.0"
However, when I send messages through the same broker using bin/kafka-console-producer.sh and bin/kafka-console-consumer.sh, I can send and receive messages just fine.
Answer:
You need to increase the value of request.timeout.ms on the client side.
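In Structured Streaming, producer settings can be passed to the Kafka sink by prefixing the option name with kafka.; Spark forwards them to the underlying producer. A minimal sketch reusing the writer from the question; the 120000 ms value is only an illustrative choice, tune it to your environment:

streamingDataFrame
  .selectExpr("CAST(id AS STRING) AS key", "to_json(struct(*)) AS value")
  .writeStream
  .format("kafka")
  .option("topic", "testing")
  .option("kafka.bootstrap.servers", "10.55.55.55:9092")
  .option("kafka.request.timeout.ms", "120000") // raise the 30 s default
  .option("checkpointLocation", "file:///opt/")
  .start()
  .awaitTermination()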
Kafka groups records into batches to improve throughput. When a new record is added to a batch, it must be sent within a time limit. request.timeout.ms is a configurable parameter (default: 30 seconds) that controls this limit.
When a batch has been queued for longer than that, a TimeoutException is thrown and its records are removed from the queue, so those messages are never delivered.
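For illustration, here is a minimal standalone kafka-clients producer sketch showing the two settings at play: linger.ms delays sends so batches can fill up, while request.timeout.ms bounds how long a queued record may wait before expiring. The values below are arbitrary examples, not recommendations:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "10.55.55.55:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("linger.ms", "5")                // wait up to 5 ms to fill a batch
props.put("request.timeout.ms", "120000")  // expire queued records after 120 s instead of 30 s

val producer = new KafkaProducer[String, String](props)
producer.send(new ProducerRecord[String, String]("testing", "1", """{"ID":1,"ACCOUNT_NUMBER":"ACC1"}"""))
producer.close()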