Flink Kafka program in Scala throws timeout error org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms

Time: 2017-12-15 11:30:18

Tags: apache-flink flink-streaming

I am writing a Flink Kafka integration program, shown below, but I am getting a timeout error from Kafka:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaConsumer010, FlinkKafkaProducer010}
import org.apache.flink.streaming.util.serialization.SimpleStringSchema
import java.util.Properties

object StreamKafkaProducer {

  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "localhost:9092")
    properties.setProperty("zookeeper.connect", "localhost:2181")
    properties.setProperty("serializer.class", "kafka.serializer.StringEncoder")

    val stream: DataStream[String] = env.fromElements(
      "Adam",
      "Sarah")

    val kafkaProducer = new FlinkKafkaProducer010[String](
      "localhost:9092",
      "output",
      new SimpleStringSchema
    )

    // write data into Kafka
    stream.addSink(kafkaProducer)

    env.execute("Flink kafka integration")
  }
}

From the terminal I can see that Kafka and ZooKeeper are running, but when I run the above program from IntelliJ it shows this error:

C:\Users\amdass\workspace\flink-project-master>sbt run
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
support was removed in 8.0
[info] Loading project definition from C:\Users\amdass\workspace\flink-
project-master\project
[info] Set current project to Flink Project (in build 
file:/C:/Users/amdass/workspace/flink-project-master/)
[info] Compiling 1 Scala source to C:\Users\amdass\workspace\flink-project-
master\target\scala-2.11\classes...
[info] Running org.example.StreamKafkaProducer
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
details.
Connected to JobManager at Actor[akka://flink/user/jobmanager_1#-563113020] 
with leader session id 5a637740-5c73-4f69-a19e-c8ef7141efa1.
12/15/2017 14:41:49     Job execution switched to status RUNNING.
12/15/2017 14:41:49     Source: Collection Source(1/1) switched to SCHEDULED
12/15/2017 14:41:49     Sink: Unnamed(1/4) switched to SCHEDULED
12/15/2017 14:41:49     Sink: Unnamed(2/4) switched to SCHEDULED
12/15/2017 14:41:49     Sink: Unnamed(3/4) switched to SCHEDULED
12/15/2017 14:41:49     Sink: Unnamed(4/4) switched to SCHEDULED
12/15/2017 14:41:49     Source: Collection Source(1/1) switched to DEPLOYING
12/15/2017 14:41:49     Sink: Unnamed(1/4) switched to DEPLOYING
12/15/2017 14:41:49     Sink: Unnamed(2/4) switched to DEPLOYING
12/15/2017 14:41:49     Sink: Unnamed(3/4) switched to DEPLOYING
12/15/2017 14:41:49     Sink: Unnamed(4/4) switched to DEPLOYING
12/15/2017 14:41:50     Source: Collection Source(1/1) switched to RUNNING
12/15/2017 14:41:50     Sink: Unnamed(2/4) switched to RUNNING
12/15/2017 14:41:50     Sink: Unnamed(4/4) switched to RUNNING
12/15/2017 14:41:50     Sink: Unnamed(3/4) switched to RUNNING
12/15/2017 14:41:50     Sink: Unnamed(1/4) switched to RUNNING
12/15/2017 14:41:50     Source: Collection Source(1/1) switched to FINISHED
12/15/2017 14:41:50     Sink: Unnamed(3/4) switched to FINISHED
12/15/2017 14:41:50     Sink: Unnamed(4/4) switched to FINISHED
12/15/2017 14:42:50     Sink: Unnamed(1/4) switched to FAILED
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata
after 60000 ms.

12/15/2017 14:42:50     Sink: Unnamed(2/4) switched to FAILED
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata 
after 60000 ms.

12/15/2017 14:42:50     Job execution switched to status FAILING.

org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.

12/15/2017 14:42:50     Job execution switched to status FAILED.
[error] (run-main-0) org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
        at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply$mcV$sp(JobManager.scala:933)
        at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:876)
        at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$6.apply(JobManager.scala:876)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
        at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
        at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[trace] Stack trace suppressed: run last *:run for the full output.
java.lang.RuntimeException: Nonzero exit code: 1
        at scala.sys.package$.error(package.scala:27)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Nonzero exit code: 1
[error] Total time: 75 s, completed Dec 15, 2017 2:42:51 PM

1 Answer:

Answer 0 (score: 0)

Please check and make sure that your Kafka server is running. This error usually occurs when your Flink program cannot reach the Kafka server. Flink's Kafka producer waits for cluster metadata up to a threshold time; once that threshold is reached and a connection to Kafka still cannot be established, it throws this org.apache.kafka.common.errors.TimeoutException.
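As a side note (my own assumption, not stated in the original answer): the 60000 ms in the error matches the Kafka producer's default `max.block.ms`, which bounds how long a send may block waiting for cluster metadata. A minimal sketch of producer properties that fail fast instead of blocking for a full minute; the 10-second value is illustrative only:

```scala
import java.util.Properties

object ProducerPropsSketch {
  // Sketch: build producer-side settings only. "max.block.ms" bounds how
  // long the producer may block waiting for cluster metadata (Kafka's
  // default is 60000 ms, matching the timeout in the question). Lowering
  // it makes a broken broker connection surface sooner.
  def failFastProducerProps(bootstrapServers: String): Properties = {
    val props = new Properties()
    props.setProperty("bootstrap.servers", bootstrapServers)
    props.setProperty("max.block.ms", "10000") // illustrative value
    props
  }

  def main(args: Array[String]): Unit = {
    val props = failFastProducerProps("localhost:9092")
    println(props.getProperty("max.block.ms")) // prints "10000"
  }
}
```

Note that `zookeeper.connect` and `serializer.class`, as set in the question's code, are consumer / old-producer settings; to my knowledge the Kafka 0.10 producer simply ignores them, so they are omitted here.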

Please check your Kafka server details and your Kafka topic, and verify that your Kafka server is running.
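To verify quickly whether anything is listening at the broker address from the question (localhost:9092), a plain TCP probe can be used before involving Flink at all. This is a diagnostic sketch of my own, not a full Kafka health check; it only confirms the port accepts connections:

```scala
import java.net.{InetSocketAddress, Socket}

object BrokerCheck {
  // Returns true if a TCP connection to host:port succeeds within
  // timeoutMs. A "false" here means the Flink producer has no chance
  // of fetching metadata from that address either.
  def reachable(host: String, port: Int, timeoutMs: Int = 3000): Boolean = {
    val socket = new Socket()
    try {
      socket.connect(new InetSocketAddress(host, port), timeoutMs)
      true
    } catch {
      case _: java.io.IOException => false
    } finally {
      socket.close()
    }
  }

  def main(args: Array[String]): Unit = {
    println(s"broker reachable: ${reachable("localhost", 9092)}")
  }
}
```

If this prints `false`, fix the broker (or the advertised listener address) before re-running the Flink job.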