I am running into a problem clustering Infinispan (via JGroups) across different server setups. When I attempt this kind of clustering:
...each node joins the cluster just fine (I can see this in the output, which I assume comes from JGroups).
However, when I attempt this kind of clustering:
...the node whose application is deployed into JBoss AS 7 fails to join the cluster.
In short, it appears that a node deployed together with the application (i.e. with the Infinispan jars and configuration embedded in the WAR) cannot join a cluster with a node on a standalone deployment (where the cache containers are configured in XML via the subsystem configuration).
Note that I am using Infinispan 7 (which I admit is currently in beta, but I need it because the JCache dependency in Infinispan 6 unfortunately points to the 1.0.0-PFD version of the cache-api dependency (JSR 107)).
I should also note that the reason I deploy Infinispan together with my application is that the servers I deploy to must have Infinispan (i.e. JBoss AS 7 or, in some cases, quite possibly WebSphere). Some have told me it is more appropriate to configure Infinispan on the application server rather than deploy it with my application, but after spending more than an hour trying to configure JBoss AS 7 with the Infinispan 7 subsystem, I gave up. Unless there are other suggestions, my deployment scenario above seems legitimate...
The JGroups configuration I deploy with my application:
<config xmlns="urn:org:jgroups" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.4.xsd">
    <TCP bind_port="7802" />
    <TCPPING timeout="3000" initial_hosts="${jgroups.tcpping.initial_hosts:MY_INTERNAL_DOMAIN[7800],localhost[7801],localhost[7802]}" port_range="1"
             num_initial_members="3" />
    <VERIFY_SUSPECT timeout="1500" />
    <pbcast.NAKACK use_mcast_xmit="false" retransmit_timeout="300,600,1200,2400,4800" discard_delivered_msgs="true" />
    <pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000" max_bytes="400000" />
    <pbcast.GMS print_local_addr="true" join_timeout="3000" view_bundling="true" />
</config>
The JGroups stack configured in my Infinispan 7 standalone server:
<stack name="tcp">
    <transport type="TCP">
        <property name="bind_port">7800</property>
    </transport>
    <protocol type="TCPPING">
        <property name="timeout">3000</property>
        <property name="initial_hosts">${jgroups.tcpping.initial_hosts:MY_INTERNAL_DOMAIN.atldev.com[7800],localhost[7801]}</property>
        <property name="port_range">1</property>
        <property name="num_initial_members">3</property>
    </protocol>
    <protocol type="VERIFY_SUSPECT">
        <property name="timeout">1500</property>
    </protocol>
    <protocol type="pbcast.NAKACK">
        <property name="use_mcast_xmit">false</property>
        <property name="retransmit_timeout">300,600,1200,2400,4800</property>
        <property name="discard_delivered_msgs">true</property>
    </protocol>
    <protocol type="pbcast.STABLE">
        <property name="stability_delay">1000</property>
        <property name="desired_avg_gossip">50000</property>
        <property name="max_bytes">400000</property>
    </protocol>
    <protocol type="pbcast.GMS">
        <property name="print_local_addr">true</property>
        <property name="join_timeout">3000</property>
        <property name="view_bundling">true</property>
    </protocol>
</stack>
The cache container configured in the Infinispan 7 standalone server:
<subsystem xmlns="urn:infinispan:server:core:7.0" default-cache-container="ervm-caches">
    <cache-container name="ervm-caches" default-cache="ervm-default-cache">
        <transport stack="tcp" cluster="ervm-cluster"/>
        <distributed-cache name="default" mode="SYNC" segments="20" owners="2" remote-timeout="30000" start="EAGER">
            <locking acquire-timeout="30000" concurrency-level="1000" striping="false"/>
            <transaction mode="NONE"/>
        </distributed-cache>
        <distributed-cache name="memcachedCache" mode="SYNC" segments="20" owners="2" remote-timeout="30000" start="EAGER">
            <locking acquire-timeout="30000" concurrency-level="1000" striping="false"/>
            <transaction mode="NONE"/>
        </distributed-cache>
        <replicated-cache name="ervm-default-cache" mode="SYNC" start="EAGER"/>
    </cache-container>
</subsystem>
The cache container configured in my application (WAR) and deployed to JBoss AS 7:
<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:config:7.0 http://www.infinispan.org/schemas/infinispan-config-7.0.xsd"
            xmlns="urn:infinispan:config:7.0">
    <jgroups>
        <stack-file name="tcp" path="ervm-jgroups-tcp.xml" />
        <stack-file name="udp" path="ervm-jgroups-udp.xml" />
    </jgroups>
    <cache-container name="ervm-caches" default-cache="ervm-default-cache">
        <transport stack="tcp" cluster="ervm-cluster" />
        <replicated-cache name="ervm-default-cache" mode="SYNC" start="EAGER" />
    </cache-container>
</infinispan>
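For context, the embedded container above is bootstrapped inside the WAR roughly as follows. This is a minimal sketch: the config file name `ervm-infinispan.xml` and the put/get usage are illustrative assumptions, not taken from my actual application code.

```java
import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class CacheBootstrap {
    public static void main(String[] args) throws Exception {
        // Parse the declarative configuration shown above (packaged on the classpath).
        EmbeddedCacheManager manager = new DefaultCacheManager("ervm-infinispan.xml");
        try {
            // "ervm-default-cache" is the default cache of the "ervm-caches" container;
            // obtaining it starts the cache and connects the JGroups channel,
            // which is when the node attempts to join the "ervm-cluster" cluster.
            Cache<String, String> cache = manager.getCache("ervm-default-cache");
            cache.put("key", "value");
        } finally {
            manager.stop();
        }
    }
}
```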
Answer (score: 0):
I am not sure whether this helps, but you should use socket bindings to specify ports and the like:
<subsystem xmlns="urn:jboss:domain:jgroups:1.2" default-stack="udp">
    <stack name="udp">
        <transport type="UDP" socket-binding="jgroups-udp"/>
        ...
    </stack>
</subsystem>
<interfaces>
    <interface name="public">
        <inet-address value="${iface.public}" />
    </interface>
    <interface name="clustering">
        <inet-address value="${iface.clustering}" />
    </interface>
</interfaces>
<socket-binding-group name="standard-sockets" default-interface="public">
    <socket-binding name="jgroups-udp" interface="clustering" multicast-address="${udpGroup}" multicast-port="45688" port="55200" />
    <socket-binding name="jgroups-udp-fd" interface="clustering" port="54200" />
</socket-binding-group>
Also, if you want combined access both via protocols (HotRod, Memcached, ...) and via embedded clients, you should enable compatibility mode.
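A sketch of what that could look like for one of the question's server-side caches, assuming the `<compatibility>` element of the Infinispan 7 cache schema (double-check against your exact subsystem schema version):

```xml
<distributed-cache name="memcachedCache" mode="SYNC" segments="20" owners="2" remote-timeout="30000" start="EAGER">
    <!-- lets the same entries be read and written through HotRod,
         Memcached and embedded clients -->
    <compatibility enabled="true"/>
</distributed-cache>
```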