How to split queued messages between several Artemis brokers

Asked: 2018-06-26 09:07:18

Tags: jms activemq-artemis

I've gone through the examples, the documentation, and Google without any luck, so I'm asking here.

Scenario: "rebalancing" queued messages

Given a cluster (masterA and masterB, UDP discovery, round-robin) with a durable queue (testQueue) on each node. Each node runs on a separate server.

And messages arriving continuously.

When masterA goes down (for whatever reason), messages are consumed only by the consumers registered on masterB. Messages keep arriving, so the messageCount on masterB keeps growing.

When masterA is restarted, it only serves new messages (plus whatever was in its own queue before the stop/crash), while masterB is still stuck with a large backlog of old messages in its queue.

Is it possible to redistribute messages from masterB to masterA, so that consumers connected to masterA can consume the "old" messages (originally stored on masterB)? The data carried by my messages is short-lived (e.g. one-time payment verification codes). I'm aware of HA, but I need something that also covers the case where both a master and its slave are down (masterA is paired with slaveA, and masterB with slaveB, via the ha-policy/replication/master/group-name configuration in broker.xml).
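
For reference, the slave side of such a pair looks roughly like the sketch below, mirroring the masterA ha-policy shown further down (sketch only; grappe-1 is the group-name that pairs it with masterA, and allow-failback is shown with its typical value):

<ha-policy>
   <replication>
      <slave>
         <!-- pairs this backup with the master that declares the same group-name -->
         <group-name>grappe-1</group-name>
         <!-- let the original master take over again once it comes back -->
         <allow-failback>true</allow-failback>
      </slave>
   </replication>
</ha-policy>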

Diverts alone can't achieve this, because I need to reroute messages to another server (the one with "free" matching consumers):

  Diverts will only divert messages to addresses on the same server. However, if you want to divert to an address on a different server, a common pattern would be to divert to a local store-and-forward queue, then set up a bridge which consumes from that queue and forwards to an address on a different server.
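
In broker.xml terms, that pattern would look roughly like the following sketch (the store-and-forward queue name testQueue.fwd and the connector name remote-broker are illustrative assumptions; the connector would have to be declared under <connectors> pointing at the other server):

<!-- sketch only: divert testQueue into a local store-and-forward queue,
     then bridge that queue to the same address on the other broker -->
<addresses>
   <address name="testQueue.fwd">
      <anycast>
         <queue name="testQueue.fwd"/>
      </anycast>
   </address>
</addresses>

<diverts>
   <divert name="fwd-divert">
      <address>testQueue</address>
      <forwarding-address>testQueue.fwd</forwarding-address>
      <!-- exclusive: matching messages go only to the forwarding address -->
      <exclusive>true</exclusive>
   </divert>
</diverts>

<bridges>
   <bridge name="fwd-bridge">
      <!-- consumes from the local store-and-forward queue -->
      <queue-name>testQueue.fwd</queue-name>
      <!-- delivers to testQueue on the remote broker -->
      <forwarding-address>testQueue</forwarding-address>
      <static-connectors>
         <connector-ref>remote-broker</connector-ref>
      </static-connectors>
   </bridge>
</bridges>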

And I don't want to resort to a permanent bridge just to avoid manual intervention.

Message redistribution only kicks in when a queue has no consumers at all. My use case is "consumer starvation" (the consumers on one node have nothing to do while the consumers on the other node are 100% busy), which is different from what is described in the clustering doc.

Is it possible to share messages between the old nodes and a newly added node without using a shared store? I want to avoid shared storage because I want to stay away from things like network shares. The application is hosted in a virtual environment, and VMs are occasionally moved from one ESX host to another (vMotion), which can cause freezes; while a shared store is frozen, no consumer can consume at all, which is bad given the short TTL of the data.
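
For clarity, the shared-store flavour I'm trying to avoid would look roughly like this on the master side (sketch with typical elements, not my actual config):

<ha-policy>
   <shared-store>
      <master>
         <!-- hand over to the backup on clean shutdown as well -->
         <failover-on-shutdown>true</failover-on-shutdown>
      </master>
   </shared-store>
</ha-policy>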

In case it matters: the virtual environment is not open for discussion.

I'm using Artemis 2.6.0 with the JMS API.

The consumers are MDBs hosted on WildFly 12, and that part seems to work fine (connection rebalancing works very well).

Thanks for your help.

Edit: from what I understand, it looks like there is already a jira about this:

  Rebalance the load after the nodes get too loaded after some event.

  It might be after a failover, or after scaling up (starting new nodes on the cluster).

  Not sure whether this is just about connections or also about data (queues and topics). We believe ACTIVEMQ6-25 would take care of redistributing the data if we only did this at the connection level.

Edit:

broker.xml for masterA:

<?xml version='1.0'?>

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">

      <name>broker-0</name>


      <persistence-enabled>true</persistence-enabled>

      <!-- this could be ASYNCIO, MAPPED, NIO
           ASYNCIO: Linux Libaio
           MAPPED: mmap files
           NIO: Plain Java Files
       -->
      <journal-type>NIO</journal-type>

      <paging-directory>data/paging</paging-directory>

      <bindings-directory>data/bindings</bindings-directory>

      <journal-directory>data/journal</journal-directory>

      <large-messages-directory>data/large-messages</large-messages-directory>

      <journal-datasync>true</journal-datasync>

      <journal-min-files>2</journal-min-files>

      <journal-pool-files>10</journal-pool-files>

      <journal-file-size>10M</journal-file-size>

      <journal-buffer-timeout>1208000</journal-buffer-timeout>


      <!--
        When using ASYNCIO, this will determine the writing queue depth for libaio.
       -->
      <journal-max-io>1</journal-max-io>

    <connectors>
        <!-- Connector used to be announced through cluster connections and notifications -->
        <connector name="artemis-master-1">tcp://localhost:61616</connector>
        <!--        <connector name="artemis-slave-1">tcp://localhost:61617</connector>         -->
    </connectors>



      <!-- how often we are looking for how many bytes are being used on the disk in ms -->
      <disk-scan-period>5000</disk-scan-period>

      <!-- once the disk hits this limit the system will block, or close the connection in certain protocols
           that won't support flow control. -->
      <max-disk-usage>90</max-disk-usage>

      <!-- should the broker detect dead locks and other issues -->
      <critical-analyzer>true</critical-analyzer>

      <critical-analyzer-timeout>120000</critical-analyzer-timeout>

      <critical-analyzer-check-period>60000</critical-analyzer-check-period>

      <critical-analyzer-policy>HALT</critical-analyzer-policy>

      <acceptors>

         <!-- Acceptor for every supported protocol -->
         <acceptor name="artemis">tcp://localhost:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>

         <!-- AMQP Acceptor.  Listens on default AMQP port for AMQP traffic.-->
         <acceptor name="amqp">tcp://localhost:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpMinCredits=300</acceptor>

         <!-- STOMP Acceptor. -->
         <acceptor name="stomp">tcp://localhost:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>

         <!-- HornetQ Compatibility Acceptor.  Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
         <acceptor name="hornetq">tcp://localhost:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>

         <!-- MQTT Acceptor -->
         <acceptor name="mqtt">tcp://localhost:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>

      </acceptors>


      <cluster-user>admin</cluster-user>

      <cluster-password>admin</cluster-password>

      <broadcast-groups>
         <broadcast-group name="bg-group1">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>
            <broadcast-period>1801</broadcast-period>
            <connector-ref>artemis-master-1</connector-ref>
         </broadcast-group>
      </broadcast-groups>

      <discovery-groups>
         <discovery-group name="dg-group1">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>

            <refresh-timeout>8001</refresh-timeout>
         </discovery-group>
      </discovery-groups>

      <cluster-connections>
         <cluster-connection name="my-cluster">
            <connector-ref>artemis-master-1</connector-ref>
            <check-period>800</check-period>
              <connection-ttl>2001</connection-ttl>
              <initial-connect-attempts>4</initial-connect-attempts>
              <reconnect-attempts>1</reconnect-attempts>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>1</max-hops>
            <notification-interval>500</notification-interval>
            <notification-attempts>1</notification-attempts>
            <discovery-group-ref discovery-group-name="dg-group1"/>
         </cluster-connection>
      </cluster-connections>


      <ha-policy>
         <replication>
            <master>
              <group-name>grappe-1</group-name>
              <check-for-live-server>true</check-for-live-server>            
            </master>
         </replication>
      </ha-policy>

      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
         </security-setting>
      </security-settings>

      <address-settings>
         <!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         <!--default for catch all-->
         <address-setting match="#">
            <redistribution-delay>5000</redistribution-delay>
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
      </address-settings>

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>

      </addresses>

   </core>
</configuration>

broker.xml for masterB:

<?xml version='1.0'?>

<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xi="http://www.w3.org/2001/XInclude"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="urn:activemq:core">

      <name>broker-2</name>


      <persistence-enabled>true</persistence-enabled>

      <!-- this could be ASYNCIO, MAPPED, NIO
           ASYNCIO: Linux Libaio
           MAPPED: mmap files
           NIO: Plain Java Files
       -->
      <journal-type>NIO</journal-type>

      <paging-directory>data/paging</paging-directory>

      <bindings-directory>data/bindings</bindings-directory>

      <journal-directory>data/journal</journal-directory>

      <large-messages-directory>data/large-messages</large-messages-directory>

      <journal-datasync>true</journal-datasync>

      <journal-min-files>2</journal-min-files>

      <journal-pool-files>10</journal-pool-files>

      <journal-file-size>10M</journal-file-size>

      <journal-buffer-timeout>1160000</journal-buffer-timeout>


      <!--
        When using ASYNCIO, this will determine the writing queue depth for libaio.
       -->
      <journal-max-io>1</journal-max-io>
      <!--
        You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
         <network-check-NIC>theNicName</network-check-NIC>
        -->

    <connectors>
        <!-- Connector used to be announced through cluster connections and notifications -->
        <connector name="artemis-master-2">tcp://localhost:61718</connector>        <!--        <connector name="artemis-slave-1">tcp://localhost:61617</connector>         -->
    </connectors>

      <!-- how often we are looking for how many bytes are being used on the disk in ms -->
      <disk-scan-period>5000</disk-scan-period>

      <!-- once the disk hits this limit the system will block, or close the connection in certain protocols
           that won't support flow control. -->
      <max-disk-usage>90</max-disk-usage>

      <!-- should the broker detect dead locks and other issues -->
      <critical-analyzer>true</critical-analyzer>

      <critical-analyzer-timeout>120000</critical-analyzer-timeout>

      <critical-analyzer-check-period>60000</critical-analyzer-check-period>

      <critical-analyzer-policy>HALT</critical-analyzer-policy>

      <!-- the system will enter into page mode once you hit this limit.
           This is an estimate in bytes of how much the messages are using in memory

            The system will use half of the available memory (-Xmx) by default for the global-max-size.
            You may specify a different value here if you need to customize it to your needs.

            <global-max-size>100Mb</global-max-size>

      -->

      <acceptors>

         <!-- Acceptor for every supported protocol -->
         <acceptor name="artemis">tcp://localhost:61718?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>

         <!-- AMQP Acceptor.  Listens on default AMQP port for AMQP traffic.-->
         <acceptor name="amqp">tcp://localhost:5772?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>

         <!-- STOMP Acceptor. -->
         <acceptor name="stomp">tcp://localhost:61713?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>

         <!-- HornetQ Compatibility Acceptor.  Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
         <acceptor name="hornetq">tcp://localhost:5545?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>

         <!-- MQTT Acceptor -->
         <acceptor name="mqtt">tcp://localhost:1983?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>

      </acceptors>


      <cluster-user>admin</cluster-user>

      <cluster-password>admin</cluster-password>

      <broadcast-groups>
         <broadcast-group name="bg-group1">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>
            <broadcast-period>1803</broadcast-period>
            <connector-ref>artemis-master-2</connector-ref>
         </broadcast-group>
      </broadcast-groups>

      <discovery-groups>
         <discovery-group name="dg-group1">
            <group-address>231.7.7.7</group-address>
            <group-port>9876</group-port>
            <refresh-timeout>8003</refresh-timeout>
         </discovery-group>
      </discovery-groups>

      <cluster-connections>
         <cluster-connection name="my-cluster">
            <connector-ref>artemis-master-2</connector-ref>
            <check-period>800</check-period>
            <connection-ttl>2003</connection-ttl>
            <initial-connect-attempts>4</initial-connect-attempts>
            <reconnect-attempts>1</reconnect-attempts>
            <message-load-balancing>ON_DEMAND</message-load-balancing>
            <max-hops>1</max-hops>
            <notification-interval>500</notification-interval>
            <notification-attempts>1</notification-attempts>
            <discovery-group-ref discovery-group-name="dg-group1"/>
         </cluster-connection>
      </cluster-connections>


      <ha-policy>
        <replication>
            <master>
              <group-name>grappe-2</group-name>
              <check-for-live-server>true</check-for-live-server>
            </master>
        </replication>
      </ha-policy>

      <security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
         </security-setting>
      </security-settings>

      <address-settings>
         <!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         <!--default for catch all-->
         <address-setting match="#">
            <redistribution-delay>5000</redistribution-delay>
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
      </address-settings>

      <addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ" />
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue" />
            </anycast>
         </address>

      </addresses>
    </core>
</configuration>

Properties used for the JNDI lookup:

java.naming.factory.initial:org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory
java.naming.provider.url: udp://231.7.7.7:9876
connectionFactory.ConnectionFactory: udp://231.7.7.7:9876?type=CF&refreshTimeout=800&retryInterval=1111&reconnectAttempts=3&connectionTtl=6000&connection-ttl-check-interval=600&clientFailureCheckPeriod=5000&initialConnectAttempts=9&discoveryGroupRef=dg-group-1&connection-load-balancing-policy-class-name=org.apache.activemq.artemis.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy

0 Answers:

No answers yet.