Accepted socket connection from /hostname:55306 (org.apache.zookeeper.server.NIOServerCnxnFactory)

Time: 2018-07-03 07:34:20

Tags: hadoop apache-kafka apache-zookeeper apache-storm

I have set up a Kafka cluster, a Storm cluster, and a Hadoop cluster. Each of them works fine on its own.

When I submit the Storm jar (which reads data from Kafka, processes it, and then stores it into HDFS) in local/standalone mode, it works fine.

After configuring the same code with the server properties and running it on the server, the following error appears:

[2018-07-03 12:54:00,370] INFO Accepted socket connection from /192.168.3.222:55306 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2018-07-03 12:54:00,381] INFO Client attempting to establish new session at /192.168.3.222:55306 (org.apache.zookeeper.server.ZooKeeperServer)
[2018-07-03 12:54:00,383] INFO Established session 0x3645ed69ca40031 with negotiated timeout 20000 for client /192.168.3.222:55306 (org.apache.zookeeper.server.ZooKeeperServer)
[2018-07-03 12:54:02,429] WARN caught end of stream exception (org.apache.zookeeper.server.NIOServerCnxn)

EndOfStreamException: Unable to read additional data from client sessionid 0x3645ed69ca40031, likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:239)
at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:203)
at java.lang.Thread.run(Thread.java:748)

[2018-07-03 12:54:02,433] INFO Closed socket connection for client /192.168.3.222:55306 which had sessionid 0x3645ed69ca40031 (org.apache.zookeeper.server.NIOServerCnxn)
[2018-07-03 12:54:06,000] INFO Expiring session 0x1645ed69c8c0041, timeout of 20000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)
[2018-07-03 12:54:06,000] INFO Processed session termination for sessionid: 0x1645ed69c8c0041 (org.apache.zookeeper.server.PrepRequestProcessor)

The versions I am using:

  • apache-storm-1.0.6
  • kafka_2.11-1.0.1
  • zookeeper-3.4.12
  • hadoop-2.9.1

Nimbus log

2018-07-04 12:28:54.455 o.a.s.d.nimbus timer [INFO] Setting new assignment for topology id test-topology-1-1530686803: #org.apache.storm.daemon.common.Assignment{:master-code-dir "/usr/local/apache-services/data/storm", :node->host {"7c98bf5a-38d5-4a13-95ad-966be3a51c49" "datanode2.sakha.com"}, :executor->node+port {[2 2] ["7c98bf5a-38d5-4a13-95ad-966be3a51c49" 6700], [1 1] ["7c98bf5a-38d5-4a13-95ad-966be3a51c49" 6700], [3 3] ["7c98bf5a-38d5-4a13-95ad-966be3a51c49" 6700]}, :executor->start-time-secs {[1 1] 1530687534, [2 2] 1530687534, [3 3] 1530687534}, :worker->resources {["7c98bf5a-38d5-4a13-95ad-966be3a51c49" 6700] [0.0 0.0 0.0]}, :owner "hduser"}
2018-07-04 12:28:54.520 o.a.s.d.nimbus pool-14-thread-7 [INFO] Created download session for test-topology-1-1530686803-stormjar.jar with id a9762861-224e-4f40-824b-ae0efa687452

Supervisor log

2018-07-04 12:30:46.461 o.a.s.d.s.Container SLOT_6700 [INFO] Creating symlinks for worker-id: b9c3daa0-4f4d-42d7-9963-e93b6e6179a3 storm-id: test-topology-1-1530686803 for files(0): []
2018-07-04 12:30:46.461 o.a.s.d.s.Container SLOT_6700 [INFO] Topology jar for worker-id: b9c3daa0-4f4d-42d7-9963-e93b6e6179a3 storm-id: test-topology-1-1530686803 does not contain resources directory /usr/local/apache-services/data/storm/supervisor/stormdist/test-topology-1-1530686803/resources.
2018-07-04 12:30:46.461 o.a.s.d.s.BasicContainer SLOT_6700 [INFO] Launching worker with assignment LocalAssignment(topology_id:test-topology-1-1530686803, executors:[ExecutorInfo(task_start:2, task_end:2), ExecutorInfo(task_start:1, task_end:1), ExecutorInfo(task_start:3, task_end:3)], resources:WorkerResources(mem_on_heap:0.0, mem_off_heap:0.0, cpu:0.0), owner:hduser) for this supervisor 7c98bf5a-38d5-4a13-95ad-966be3a51c49 on port 6700 with id b9c3daa0-4f4d-42d7-9963-e93b6e6179a3

1 Answer:

Answer 0 (score: 1)

There is a problem with your dependency tree. You posted that your worker log contains java.lang.NoSuchMethodError: org.apache.hadoop.security.authentication.util.KerberosUtil.hasKerberosTicket. That indicates that you either have the wrong version of the Hadoop jars on the classpath when you submit the jar, or that you are missing the jar entirely.

Here is the pom for storm-hdfs: https://github.com/apache/storm/blob/v1.0.6/external/storm-hdfs/pom.xml. By default it builds against Hadoop 2.6.1. If you want to use a different Hadoop version, you need to make sure the Hadoop dependencies it lists are replaced with the newer versions in your pom (i.e. you need to manually list e.g. hadoop-client at version 2.9.1 in your pom).
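For illustration only, here is a minimal sketch of how the dependency section of the topology pom might look under these assumptions: the topology uses storm-hdfs 1.0.6, the cluster runs Hadoop 2.9.1 (the versions listed in the question), and storm-core is marked provided as is conventional for Storm topologies. None of this is taken from the asker's actual pom.

<dependencies>
  <dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>1.0.6</version>
    <!-- Supplied by the Storm workers at runtime, so not bundled into the topology jar. -->
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-hdfs</artifactId>
    <version>1.0.6</version>
  </dependency>
  <!-- Direct declarations win over transitive ones in Maven, so these entries
       replace the Hadoop 2.6.1 jars that storm-hdfs pulls in by default. -->
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.9.1</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-hdfs</artifactId>
    <version>2.9.1</version>
  </dependency>
</dependencies>

Whether hadoop-hdfs (or other Hadoop artifacts) also needs to be pinned depends on what storm-hdfs actually pulls in transitively for your build; the dependency:tree check mentioned below will show which versions end up on the classpath.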

A good way to debug this is to run mvn dependency:tree in your project, which will tell you which jar versions are being included in your build.