o.apache.kafka.common.network.Selector: Error in I/O with localhost/127.0.0.1

Asked: 2016-06-03 18:45:36

Tags: java apache-kafka spring-cloud-stream

My application consumes messages from a Kafka server running on one machine and forwards them to another, remote Kafka running on a different instance. After deploying the application to Cloud Foundry and sending a message to the first Kafka server, the application works as expected: the message is consumed and forwarded to the remote Kafka.

However, afterwards I get an infinite loop of the following exception in Cloud Foundry (and, at a slower rate, on my local machine as well):

StackTrace:

Fri Jun 03 2016 12:20:34 GMT-0600 (Mountain Daylight Time) [App/0] OUT 2016-06-03 18:20:34.900 WARN 29 --- [ad | producer-1] o.apache.kafka.common.network.Selector : Error in I/O with localhost/127.0.0.1
Fri Jun 03 2016 12:20:34 GMT-0600 (Mountain Daylight Time) [App/0] OUT java.net.ConnectException: Connection refused
Fri Jun 03 2016 12:20:34 GMT-0600 (Mountain Daylight Time) [App/0] OUT at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[na:1.8.0_65-]
Fri Jun 03 2016 12:20:34 GMT-0600 (Mountain Daylight Time) [App/0] OUT at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[na:1.8.0_65-]
Fri Jun 03 2016 12:20:34 GMT-0600 (Mountain Daylight Time) [App/0] OUT at org.apache.kafka.common.network.Selector.poll(Selector.java:238) ~[kafka-clients-0.8.2.2.jar!/:na]
Fri Jun 03 2016 12:20:34 GMT-0600 (Mountain Daylight Time) [App/0] OUT at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) [kafka-clients-0.8.2.2.jar!/:na]
Fri Jun 03 2016 12:20:34 GMT-0600 (Mountain Daylight Time) [App/0] OUT at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) [kafka-clients-0.8.2.2.jar!/:na]
Fri Jun 03 2016 12:20:34 GMT-0600 (Mountain Daylight Time) [App/0] OUT at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) [kafka-clients-0.8.2.2.jar!/:na]
Fri Jun 03 2016 12:20:34 GMT-0600 (Mountain Daylight Time) [App/0] OUT at java.lang.Thread.run(Thread.java:745) [na:1.8.0_65-]

My application yaml file looks like this.

Application YML:

spring:
  cloud:
    stream:
      bindings:
        activationMsgQueue:
          binder: kafka1
          destination: test
          contentType: application/json
          consumer:
            resetOffsets: true
            startOffset: latest
        input:
          binder: kafka2
          content-type: application/x-java-object;type=com.comcast.activation.message.vo.ActivationDataInfo
          destination: test
          group: prac  
      binders:
        kafka1:
          type: kafka
          environment:
            spring:
              kafka:
                host: caapmsg-as-a1p.sys.comcast.net
        kafka2:
          type: kafka
          environment:
            spring:
              kafka:
                host: caapmsg-as-a3p.sys.comcast.net
      default-binder: kafka2                    
      kafka:
        binder:
          zk-nodes: caapmsg-as-a1p.sys.comcast.net, caapmsg-as-a3p.sys.comcast.net

I have observed that if I include the configuration below, the error goes away, but then I get an infinite loop of messages being consumed and sent.

SNIPPET:

kafka:
  binder:
    brokers: caapmsg-as-a1p.sys.comcast.net, caapmsg-as-a3p.sys.comcast.net
    zk-nodes: caapmsg-as-a1p.sys.comcast.net, caapmsg-as-a3p.sys.comcast.net

What do I need to do to stop this infinite loop?

Hi Marius, thanks for responding to the SOS call. I have refined the setup described above. The stream now consumes from a1p (topic: test) and, if the message is valid, forwards it to a3p (topic: test); otherwise it sends an error message to a1p (topic: errorMsgQueue). I have the following application.yml file:

spring:
  cloud:
    stream:
      bindings:
        errorMsgQueue:
          binder: kafka1
          destination: errorMsgQueue
          contentType: application/json
        input:
          binder: kafka2
          content-type: application/x-java-object;type=com.comcast.activation.message.vo.ActivationDataInfo
          destination: test
          group: prac
        activationMsgQueue:
          binder: kafka3
          destination: test
          contentType: application/json
      binders:
        kafka1:
          type: kafka
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: caapmsg-as-a1p.sys.comcast.net
                      zk-nodes: caapmsg-as-a1p.sys.comcast.net
        kafka2:
          type: kafka
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: caapmsg-as-a3p.sys.comcast.net
                      zk-nodes: caapmsg-as-a3p.sys.comcast.net
        kafka3:
          type: kafka
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: caapmsg-as-a1p.sys.comcast.net
                      zk-nodes: caapmsg-as-a1p.sys.comcast.net
      default-binder: kafka2

I still get the infinite loop. What am I doing wrong?

1 answer:

Answer 0: (score: 0)

spring.kafka.host is not a valid configuration option for Spring Cloud Stream. The properties listed at http://docs.spring.io/spring-cloud-stream/docs/1.0.0.RELEASE/reference/htmlsingle/index.html#_kafka_binder_properties are the only properties supported by the binder.
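For reference, a minimal sketch of how the broker location is expressed through the binder's own property namespace instead of spring.kafka.host (host names taken from your question; since the invalid property is simply ignored, the client presumably falls back to its localhost default, which would match the localhost/127.0.0.1 connection errors in your log):

spring:
  cloud:
    stream:
      kafka:
        binder:
          # Kafka binder properties live under spring.cloud.stream.kafka.binder.*
          brokers: caapmsg-as-a1p.sys.comcast.net
          zk-nodes: caapmsg-as-a1p.sys.comcast.net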

Also, your application seems to be mixing the configuration of the two clusters. (I assume they are separate clusters?)

It should look something like this:

spring:
  cloud:
    stream:
      bindings:
        activationMsgQueue:
          binder: kafka1
          destination: test
          contentType: application/json
          consumer:
            resetOffsets: true
            startOffset: latest
        input:
          binder: kafka2
          content-type: application/x-java-object;type=com.comcast.activation.message.vo.ActivationDataInfo
          destination: test
          group: prac
      binders:
        kafka1:
          type: kafka
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: caapmsg-as-a1p.sys.comcast.net
                      zk-nodes: caapmsg-as-a1p.sys.comcast.net
        kafka2:
          type: kafka
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: caapmsg-as-a3p.sys.comcast.net
                      zk-nodes: caapmsg-as-a3p.sys.comcast.net
      default-binder: kafka2

For details, see this sample: https://github.com/spring-cloud/spring-cloud-stream-samples/blob/master/multibinder-differentsystems/src/main/resources/application.yml

I suspect the infinite loop is somehow caused by sending and receiving messages on the same topic.
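If that is the cause, pointing the producing binding at a different topic from the consuming binding should break the cycle. A minimal sketch (the topic name test-out is purely illustrative, not something from your setup):

spring:
  cloud:
    stream:
      bindings:
        input:
          binder: kafka2
          destination: test      # consuming binding stays on the existing topic
          group: prac
        activationMsgQueue:
          binder: kafka1
          destination: test-out  # hypothetical separate topic, so the app never re-consumes its own output
          contentType: application/json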