Hibernate Search + Infinispan + JGroups backend slave locking issue

Date: 2017-08-03 21:18:14

Tags: hibernate-search infinispan jgroups

I'm new to Hibernate Search. We decided to use Hibernate Search in my application and chose the JGroups backend. This is my configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:7.0 
http://www.infinispan.org/schemas/infinispan-config-7.0.xsd
                    urn:infinispan:config:store:jdbc:7.0 http://www.infinispan.org/schemas/infinispan-cachestore-jdbc-config-7.0.xsd"
xmlns="urn:infinispan:config:7.0"
xmlns:jdbc="urn:infinispan:config:store:jdbc:7.0">

<!-- *************************** -->
<!-- System-wide global settings -->
<!-- *************************** -->
<jgroups>
    <!-- Note that the JGroups transport uses sensible defaults if no configuration
        property is defined. See the JGroupsTransport javadocs for more flags.
        jgroups-udp.xml is the default stack bundled in the Infinispan core jar: integration
        and tuning are tested by Infinispan. -->
  <stack-file name="default-jgroups-tcp" path="proform-jgroups.xml" />
</jgroups>

<cache-container name="HibernateSearch" default-cache="default" statistics="false" shutdown-hook="DONT_REGISTER">

    <transport stack="default-jgroups-tcp" cluster="venkatcluster"/>

    <!-- Duplicate domains are allowed so that multiple deployments with default configuration
        of Hibernate Search applications work - if possible it would be better to use JNDI to share
        the CacheManager across applications -->
    <jmx duplicate-domains="true" />

     <!-- *************************************** -->
     <!--  Cache to store Lucene's file metadata  -->
     <!-- *************************************** -->
     <replicated-cache name="LuceneIndexesMetadata" mode="SYNC" remote-timeout="25000">
        <transaction mode="NONE"/>
        <state-transfer enabled="true" timeout="480000" await-initial-transfer="true" />
        <indexing index="NONE" />
        <eviction max-entries="-1" strategy="NONE"/>
        <expiration max-idle="-1"/>
        <persistence passivation="false">
            <jdbc:string-keyed-jdbc-store preload="true" fetch-state="true" read-only="false" purge="false">
                <property name="key2StringMapper">org.infinispan.lucene.LuceneKey2StringMapper</property>
                <jdbc:connection-pool connection-url="jdbc:mysql://localhost:3306/entityindex" driver="com.mysql.jdbc.Driver" password="pf_user1!" username="pf_user"></jdbc:connection-pool>
                <jdbc:string-keyed-table drop-on-exit="false" create-on-start="true" prefix="ISPN_STRING_TABLE">
                    <jdbc:id-column name="ID" type="VARCHAR(255)"/>
                    <jdbc:data-column name="DATA" type="BLOB"/>
                    <jdbc:timestamp-column name="TIMESTAMP" type="BIGINT"/>
                </jdbc:string-keyed-table>
            </jdbc:string-keyed-jdbc-store>
        </persistence>
     </replicated-cache>

     <!-- **************************** -->
     <!--  Cache to store Lucene data  -->
     <!-- **************************** -->
     <distributed-cache name="LuceneIndexesData" mode="SYNC" remote-timeout="25000">
        <transaction mode="NONE"/>
        <state-transfer enabled="true" timeout="480000" await-initial-transfer="true" />
        <indexing index="NONE" />
        <eviction max-entries="-1" strategy="NONE"/>
        <expiration max-idle="-1"/>
        <persistence passivation="false">
            <jdbc:string-keyed-jdbc-store preload="true" fetch-state="true" read-only="false" purge="false">
                <property name="key2StringMapper">org.infinispan.lucene.LuceneKey2StringMapper</property>
                <jdbc:connection-pool connection-url="jdbc:mysql://localhost:3306/entityindex" driver="com.mysql.jdbc.Driver" password="pf_user1!" username="pf_user"></jdbc:connection-pool>
                <jdbc:string-keyed-table drop-on-exit="false" create-on-start="true" prefix="ISPN_STRING_TABLE">
                    <jdbc:id-column name="ID" type="VARCHAR(255)"/>
                    <jdbc:data-column name="DATA" type="BLOB"/>
                    <jdbc:timestamp-column name="TIMESTAMP" type="BIGINT"/>
                </jdbc:string-keyed-table>
            </jdbc:string-keyed-jdbc-store>
        </persistence>
     </distributed-cache>

     <!-- ***************************** -->
     <!--  Cache to store Lucene locks  -->
     <!-- ***************************** -->
    <replicated-cache name="LuceneIndexesLocking" mode="SYNC" remote-timeout="25000">
        <transaction mode="NONE"/>
        <state-transfer enabled="true" timeout="480000" await-initial-transfer="true" />
        <indexing index="NONE" />
        <eviction max-entries="-1" strategy="NONE"/>
        <expiration max-idle="-1"/>
        <persistence passivation="false">
            <jdbc:string-keyed-jdbc-store preload="true" fetch-state="true" read-only="false" purge="false">
                <property name="key2StringMapper">org.infinispan.lucene.LuceneKey2StringMapper</property>
                <jdbc:connection-pool connection-url="jdbc:mysql://localhost:3306/entityindex" driver="com.mysql.jdbc.Driver" password="pf_user1!" username="pf_user"></jdbc:connection-pool>
                <jdbc:string-keyed-table drop-on-exit="false" create-on-start="true" prefix="ISPN_STRING_TABLE">
                    <jdbc:id-column name="ID" type="VARCHAR(255)"/>
                    <jdbc:data-column name="DATA" type="BLOB"/>
                    <jdbc:timestamp-column name="TIMESTAMP" type="BIGINT"/>
                </jdbc:string-keyed-table>
            </jdbc:string-keyed-jdbc-store>
        </persistence>
    </replicated-cache>

</cache-container>

This is my JGroups file:

   <config xmlns="urn:org:jgroups"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:org:jgroups 
   http://www.jgroups.org/schema/JGroups-3.6.xsd">
   <TCP bind_addr="${jgroups.tcp.address:127.0.0.1}"
    bind_port="${jgroups.tcp.port:7801}"
    enable_diagnostics="false"
    thread_naming_pattern="pl"
    send_buf_size="640k"
    sock_conn_timeout="300"

    thread_pool.min_threads="${jgroups.thread_pool.min_threads:2}"
    thread_pool.max_threads="${jgroups.thread_pool.max_threads:30}"
    thread_pool.keep_alive_time="60000"
    thread_pool.queue_enabled="false"  
    internal_thread_pool.min_threads="${jgroups.internal_thread_pool.min_threads:5}"
    internal_thread_pool.max_threads="${jgroups.internal_thread_pool.max_threads:20}"
    internal_thread_pool.keep_alive_time="60000"
    internal_thread_pool.queue_enabled="true"
    internal_thread_pool.queue_max_size="500"

    oob_thread_pool.min_threads="${jgroups.oob_thread_pool.min_threads:20}"
    oob_thread_pool.max_threads="${jgroups.oob_thread_pool.max_threads:200}"
    oob_thread_pool.keep_alive_time="60000"
    oob_thread_pool.queue_enabled="false"
  />
  <S3_PING access_key=""
           secret_access_key=""
           location="mybucket"
  />
  <MERGE3 min_interval="10000"
        max_interval="30000"
  />
 <FD_SOCK />
 <FD_ALL timeout="60000"
       interval="15000"
       timeout_check_interval="5000"
 />
  <VERIFY_SUSPECT timeout="5000" />
 <pbcast.NAKACK2 use_mcast_xmit="false"
               xmit_interval="1000"
               xmit_table_num_rows="50"
               xmit_table_msgs_per_row="1024"
               xmit_table_max_compaction_time="30000"
               max_msg_batch_size="100"
               resend_last_seqno="true"
 />
 <UNICAST3 xmit_interval="500"
         xmit_table_num_rows="50"
         xmit_table_msgs_per_row="1024"
         xmit_table_max_compaction_time="30000"
         max_msg_batch_size="100"
         conn_expiry_timeout="0"
 />
 <pbcast.STABLE stability_delay="500"
              desired_avg_gossip="5000"
              max_bytes="1M"
 />
 <pbcast.GMS print_local_addr="false"
           join_timeout="15000"
 />
 <MFC max_credits="2m"
    min_threshold="0.40"
 />
 <FRAG2 />
</config>

This is my flush-tcp file:

<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="urn:org:jgroups"
    xsi:schemaLocation="urn:org:jgroups 
 http://www.jgroups.org/schema/jgroups.xsd">
 <TCP bind_port="7801"/>
 <S3_PING access_key=""
          secret_access_key=""
          location=""
 />
<MERGE3/>
<FD_SOCK/>
<FD/>
<VERIFY_SUSPECT/>
<pbcast.NAKACK2 use_mcast_xmit="false"/>
<UNICAST3/>
<pbcast.STABLE/>
<pbcast.GMS/>
<MFC/>
<FRAG2/>
<pbcast.STATE_TRANSFER/>
<pbcast.FLUSH timeout="0"/>
</config>

These are the Hibernate settings:

 propertyMap.put("hibernate.search.default.directory_provider", "infinispan");
 propertyMap.put("hibernate.search.lucene_version", KeywordUtil.LUCENE_4_10_4);
 propertyMap.put("hibernate.search.infinispan.configuration_resourcename",
         "hibernate-search-infinispan-config.xml");
 propertyMap.put("hibernate.search.default.worker.execution", "sync");
 propertyMap.put("hibernate.search.default.worker.backend", "jgroups");
 propertyMap.put("hibernate.search.services.jgroups.configurationFile",
         "flush-tcp.xml");
 propertyMap.put("hibernate.search.default.exclusive_index_use", "true");

Initially we started the cluster with a single node using the above configuration, and we add nodes to the cluster depending on load; this is our architecture. Say we start the cluster at 10:00 AM. The only node becomes the master and everything works fine. At 10:10 we added a node to the cluster with a slightly changed configuration. This is the change:

 propertyMap.put("hibernate.search.default.exclusive_index_use","false");

Then I created an index through the second node, and a locking error occurred. This is the error:

 org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
 org.infinispan.lucene.locking.BaseLuceneLock@46578a74

Question: In theory the second node should become a slave and should never acquire a lock on the index; it should instruct the master node to create the index through the JGroups channel. But that is not happening. Could someone help me solve this? The problem is affecting our production system. Please help.

1 answer:

Answer 0 (score: 0):

  Question: In theory the second node should become a slave and should never acquire a lock on the index. It should instruct the master node to create the index through the JGroups channel.

There may be two problems here.

1. Different values of exclusive_index_use on different nodes

Maybe someone else can confirm, but unless your new node only addresses a completely different persistence unit and completely different indexes, I doubt that using different values of exclusive_index_use on different nodes is a good idea.

exclusive_index_use is not about not acquiring locks; it is about releasing them as soon as possible (after each changeset). If your other nodes work in exclusive mode, they will never release the lock, and your new node will time out waiting for it.

Also note that disabling exclusive_index_use is a sure way to lower write performance, since it requires constantly closing and opening the index writer. Use it with caution.

Finally, as you pointed out, only one node (the JGroups master) should ever write to the index at any given time, so you shouldn't need to disable exclusive_index_use in your case. There must be another problem...
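
For reference, a minimal sketch of what consistent settings would look like, assuming Hibernate Search 5.x (where exclusive_index_use defaults to true). The per-index form follows the usual hibernate.search.<indexname>.* property pattern; "com.example.Book" is a hypothetical index name used only for illustration:

 // Same value on every node; "true" is already the default in Hibernate Search 5.x.
 propertyMap.put("hibernate.search.default.exclusive_index_use", "true");
 // Per-index override ("com.example.Book" is a hypothetical index name):
 propertyMap.put("hibernate.search.com.example.Book.exclusive_index_use", "true");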

2. Master/slave election

If I remember correctly, the default master/slave election strategy elects a new master when you add a new node. Also, we fixed some bugs related to dynamic master election in the latest Hibernate Search version (not released yet), so you may be affected by one of those.

You could try using the jgroupsMaster backend on the first node and the jgroupsSlave backend on the second. There will no longer be any automatic master election, so you will lose the ability to keep the service running when the master node fails, but as far as I understand your main concern is scaling, so this may give you a temporary solution.

On the master node:

propertyMap.put("hibernate.search.default.worker.backend", "jgroupsMaster");

On the slave node:

propertyMap.put("hibernate.search.default.worker.backend", "jgroupsSlave");

Warning: you will need a full restart! Keeping the current jgroups backend on the master while adding another node with the jgroupsSlave backend will lead to trouble!
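
If both nodes run the same artifact, one way to keep the two roles manageable is to pick the backend from an external setting at startup. A minimal sketch; the node.role system property is an assumption of this example, not a Hibernate Search setting:

 // Hypothetical JVM property (-Dnode.role=master) selects the backend role.
 String role = System.getProperty("node.role", "slave");
 propertyMap.put("hibernate.search.default.worker.backend",
         "master".equals(role) ? "jgroupsMaster" : "jgroupsSlave");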

You may also need some configuration changes for the Infinispan directory, but I'm not familiar with it. You can check the documentation: {{3}}