I am running a Hazelcast cluster on Docker Swarm. Even though the nodes do establish a connection:
Members [1] {
    Member [10.0.0.3]:5701 - b5fae3e3-0727-4bfd-8eb1-82706256ba2d this
}
May 27, 2017 2:38:12 PM com.hazelcast.internal.management.ManagementCenterService
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Hazelcast will connect to Hazelcast Management Center on address:
http://10.0.0.3:8080/mancenter
May 27, 2017 2:38:12 PM com.hazelcast.internal.management.ManagementCenterService
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Failed to pull tasks from management center
May 27, 2017 2:38:12 PM com.hazelcast.internal.management.ManagementCenterService
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Failed to connect to:http://10.0.0.3:8080/mancenter/collector.do
May 27, 2017 2:38:12 PM com.hazelcast.core.LifecycleService
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] [10.0.0.3]:5701 is STARTED
May 27, 2017 2:38:12 PM com.hazelcast.internal.partition.impl.PartitionStateManager
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Initializing cluster partition table arrangement...
May 27, 2017 2:38:19 PM com.hazelcast.internal.cluster.ClusterService
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8]
Members [2] {
    Member [10.0.0.3]:5701 - b5fae3e3-0727-4bfd-8eb1-82706256ba2d this
    Member [10.0.0.4]:5701 - b3bd51d4-9366-45f0-bb66-78e67b13268c
}
May 27, 2017 2:38:19 PM com.hazelcast.internal.partition.impl.MigrationManager
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Re-partitioning cluster data... Migration queue size: 271
May 27, 2017 2:38:21 PM com.hazelcast.internal.partition.InternalPartitionService
after a while I keep running into this error:
WARNING: [10.0.0.3]:5701 [kpts-cluster] [3.8] Wrong bind request from [10.0.0.3]:5701! This node is not requested endpoint: [10.0.0.2]:5701
May 27, 2017 2:45:06 PM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Connection[id=18, /10.0.0.3:5701->/10.0.0.3:49575, endpoint=null, alive=false, type=MEMBER] closed. Reason: Wrong bind request from [10.0.0.3]:5701! This node is not requested endpoint: [10.0.0.2]:5701
May 27, 2017 2:45:06 PM com.hazelcast.nio.tcp.TcpIpConnection
INFO: [10.0.0.3]:5701 [kpts-cluster] [3.8] Connection[id=17, /10.0.0.2:49575->/10.0.0.2:5701, endpoint=[10.0.0.2]:5701, alive=false, type=MEMBER] closed. Reason: Connection closed by the other side
I guess this has something to do with interface eth0 on each node. It gets two addresses assigned: the "real" container address (10.0.0.3/24) and a "fake" /32 one added by the cluster manager (10.0.0.2, apparently the Swarm service VIP), which for some reason gets advertised as the endpoint. Here is the ip addr output from inside one of the containers:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
82: eth0@if83: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:00:00:03 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.3/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.0.2/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe00:3/64 scope link
       valid_lft forever preferred_lft forever
84: eth1@if85: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:12:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.3/16 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe12:3/64 scope link
       valid_lft forever preferred_lft forever
86: eth2@if87: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 02:42:0a:ff:00:07 brd ff:ff:ff:ff:ff:ff
    inet 10.255.0.7/16 scope global eth2
       valid_lft forever preferred_lft forever
    inet 10.255.0.6/32 scope global eth2
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:feff:7/64 scope link
       valid_lft forever preferred_lft forever
And here is the network configuration as read from one of the nodes (the docker network inspect output):
[
    {
        "Name": "hazelcast-net",
        "Id": "ly1p50ykwjhf68k88220gxih6",
        "Created": "2017-05-27T16:38:04.638580169+02:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Containers": {
            "0fa2bd8f8e8e931e1140e2d4bee1b43ff1f7bd5e3049d95e9176c63fa9f47e4f": {
                "Name": "kpts.1zhprrumdjvenkl4cvsc7bt40.2ugiv46ubar8utnxc5hko1hdf",
                "EndpointID": "0c5681aebbacd27672c300742077a460c07a081d113c2238f4c707def735ebec",
                "MacAddress": "02:42:0a:00:00:03",
                "IPv4Address": "10.0.0.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "c4-6f6cd87e898f",
                "IP": "10.6.225.34"
            },
            {
                "Name": "c5-77d9f542efe8",
                "IP": "10.6.225.35"
            }
        ]
    }
]
Answer 0 (score: 0)
You may find this earlier issue helpful:
Docker networking - "This node is not requested endpoint" error #4537
It is even more relevant now. You have one good connection working, which is why the nodes are able to join; however, you are most likely (guessing, since I don't have your hazelcast.xml) binding to all interfaces, so you want to change your network configuration to bind only to the desired address. We bind to * by default because we don't know which network you want to use.
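For example, here is a minimal sketch of such a restricted binding (hypothetical class name, assuming Hazelcast 3.x's programmatic Config API and the 10.0.0.0/24 overlay subnet from the question; the <interfaces> element in hazelcast.xml achieves the same):

import com.hazelcast.config.Config;
import com.hazelcast.config.InterfacesConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class RestrictedBindExample {
    public static void main(String[] args) {
        Config config = new Config();
        // Bind only to addresses on the overlay network instead of the default "*"
        InterfacesConfig interfaces = config.getNetworkConfig().getInterfaces();
        interfaces.setEnabled(true);
        interfaces.addInterface("10.0.0.*"); // or the exact member address if several match
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    }
}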
Hope this helps,
Answer 1 (score: 0)
Try the Docker Swarm discovery SPI. It provides a custom AddressPicker implementation for Swarm, which completely gets rid of this recurring Hazelcast problem with interface selection and the "This node is not requested endpoint" errors. I really wish they would fix this:
https://github.com/bitsofinfo/hazelcast-docker-swarm-discovery-spi
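A sketch of the wiring, lightly cleaned up from the snippet in the project's README (imports added; SystemPrintLogger is the logger implementation the library is said to provide, assumed here to live in the same package as SwarmAddressPicker):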
import com.hazelcast.config.ClasspathXmlConfig;
import com.hazelcast.config.Config;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.instance.AddressPicker;
import com.hazelcast.instance.DefaultNodeContext;
import com.hazelcast.instance.HazelcastInstanceFactory;
import com.hazelcast.instance.Node;
import com.hazelcast.instance.NodeContext;
import org.bitsofinfo.hazelcast.discovery.docker.swarm.SwarmAddressPicker;
import org.bitsofinfo.hazelcast.discovery.docker.swarm.SystemPrintLogger;

Config conf = new ClasspathXmlConfig("yourHzConfig.xml");

// Replace the default address picker with the swarm-aware one
NodeContext nodeContext = new DefaultNodeContext() {
    @Override
    public AddressPicker createAddressPicker(Node node) {
        // SystemPrintLogger ships with the SPI; any ILogger impl works here
        return new SwarmAddressPicker(new SystemPrintLogger());
    }
};

HazelcastInstance hazelcastInstance = HazelcastInstanceFactory
        .newHazelcastInstance(conf, "myAppName", nodeContext);
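Check the project's README for the Maven coordinates and for the library release that matches your Hazelcast version (3.8 in this question).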