We are running SolrCloud 6.3 on 16 servers with a 5-node ZooKeeper quorum. Here are the configuration stats:
Solr version: 6.3
Lucene version: 6.3
Java version: 8
Processors: 32
RAM: 32 GB
SSD storage: 100 GB
Two collections are set up across these 16 machines. Solr itself handles document routing; we are not using custom sharding. On 4 of the 16 machines there is one collection, collection "A", which has 1.1 million documents split across 2 shards. Each shard has 2 replicas, with one core per machine. On the other 12 machines there is a second collection, "B", which has 9 million documents split across 3 shards. Each shard has 4 replicas, again with one core per machine.
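As a quick sanity check (not part of the original setup, just arithmetic over the layout described above), the replica counts work out to exactly one core per machine:

```python
# Core-count sanity check for the layout described above.
# Collection "A": 2 shards x 2 replicas each, hosted on 4 machines.
a_cores = 2 * 2
# Collection "B": 3 shards x 4 replicas each, hosted on 12 machines.
b_cores = 3 * 4
print(a_cores)  # 4 cores over 4 machines -> 1 core per machine
print(b_cores)  # 12 cores over 12 machines -> 1 core per machine
```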
For collection "A", this is the solrconfig:
<luceneMatchVersion>6.3.0</luceneMatchVersion>
<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.MMapDirectoryFactory}"/>
<indexConfig>
  <useCompoundFile>false</useCompoundFile>
  <maxIndexingThreads>10</maxIndexingThreads>
  <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
    <int name="maxMergeAtOnce">2147483647</int>
    <int name="segmentsPerTier">2</int>
  </mergePolicyFactory>
  <ramBufferSizeMB>1000</ramBufferSizeMB>
  <maxBufferedDocs>1000</maxBufferedDocs>
  <filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/>
  <writeLockTimeout>1000</writeLockTimeout>
  <commitLockTimeout>10000</commitLockTimeout>
  <lockType>native</lockType>
</indexConfig>
<updateHandler class="solr.DirectUpdateHandler2">
  <maxPendingDeletes>100000</maxPendingDeletes>
  <updateLog>
    <int name="numRecordsToKeep">200</int>
    <int name="maxNumLogsToKeep">5</int>
  </updateLog>
  <autoCommit>
    <maxDocs>500</maxDocs>
    <maxTime>120000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <autoSoftCommit>
    <maxDocs>300</maxDocs>
    <maxTime>60000</maxTime>
  </autoSoftCommit>
</updateHandler>
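With the autoCommit/autoSoftCommit values above, a commit fires on whichever threshold is hit first, the document count or the timer. A rough sketch of the resulting cadence (the ingest rate is a hypothetical number for illustration, not taken from our setup):

```python
# How often commits would fire under the thresholds above, at a given ingest rate.
ingest_rate = 50  # docs per second -- hypothetical, for illustration only
soft_max_docs, soft_max_ms = 300, 60000    # autoSoftCommit thresholds
hard_max_docs, hard_max_ms = 500, 120000   # autoCommit thresholds

def commit_interval_s(max_docs, max_ms, rate):
    """Seconds until a commit triggers: doc threshold or timer, whichever comes first."""
    return min(max_docs / rate, max_ms / 1000)

print(commit_interval_s(soft_max_docs, soft_max_ms, ingest_rate))  # 6.0 -> soft commit every 6 s
print(commit_interval_s(hard_max_docs, hard_max_ms, ingest_rate))  # 10.0 -> hard commit every 10 s
```

At any non-trivial ingest rate the doc-count thresholds dominate the timers, so searchers reopen far more often than the 60 s soft-commit timer alone would suggest.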
For collection "B", this is the solrconfig:
<luceneMatchVersion>6.3.0</luceneMatchVersion>
<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.MMapDirectoryFactory}"/>
<codecFactory class="solr.SchemaCodecFactory"/>
<indexConfig>
  <useCompoundFile>false</useCompoundFile>
  <maxIndexingThreads>10</maxIndexingThreads>
  <ramBufferSizeMB>1000</ramBufferSizeMB>
  <maxBufferedDocs>1000</maxBufferedDocs>
  <filter class="solr.LimitTokenCountFilterFactory" maxTokenCount="10000"/>
  <writeLockTimeout>5000</writeLockTimeout>
  <commitLockTimeout>10000</commitLockTimeout>
  <lockType>native</lockType>
  <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
    <int name="maxMergeAtOnce">6</int>
    <int name="segmentsPerTier">6</int>
  </mergePolicyFactory>
</indexConfig>
<updateHandler class="solr.DirectUpdateHandler2">
  <maxPendingDeletes>100000</maxPendingDeletes>
  <updateLog>
    <int name="numRecordsToKeep">200</int>
    <int name="maxNumLogsToKeep">5</int>
  </updateLog>
  <autoCommit>
    <maxDocs>500</maxDocs>
    <maxTime>120000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <autoSoftCommit>
    <maxDocs>300</maxDocs>
    <maxTime>60000</maxTime>
  </autoSoftCommit>
</updateHandler>
We are facing the following problem:
We are sending 20k requests per minute to this cluster: 6k requests go to collection "A" and 14k to collection "B".
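Spread over the machines hosting each collection, that traffic works out roughly as follows (simple arithmetic over the figures above):

```python
# Per-node query load implied by the traffic split described above.
a_req_per_min = 6000 / 4    # collection "A" traffic over its 4 machines
b_req_per_min = 14000 / 12  # collection "B" traffic over its 12 machines
print(a_req_per_min / 60)             # 25.0 requests/sec per "A" node
print(round(b_req_per_min / 60, 1))   # 19.4 requests/sec per "B" node
```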
Update: Heap usage graph