Solr 6.0.0 - SolrCloud Java example

Date: 2016-06-23 04:58:26

Tags: solr

I have installed Solr on my local machine (localhost).

I started the standard SolrCloud example with the embedded ZooKeeper.

collection: gettingstarted, shards: 2, replication: 2

Processing 500 records/documents takes 115 seconds [testing on localhost]. Why does it take so long to process 500 records? Is there any way to bring this down to a few milliseconds/nanoseconds?

Note:

I have tested the same thing against a Solr instance on a remote machine, indexing data from localhost into the remote Solr [commented out in the Java code].

I started my Solr myCloudData collection with an ensemble consisting of a single ZooKeeper.

2 Solr nodes, 1 standalone ZooKeeper ensemble

collection: myCloudData, shards: 2, replication: 2

SolrCloud Java code:

package com.test.solr.basic;

import java.io.IOException;
import java.util.concurrent.TimeUnit;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class SolrjPopulatorCloudClient2 {
  public static void main(String[] args) throws IOException, SolrServerException {


    //String zkHosts = "64.101.49.57:2181/solr";
    String zkHosts = "localhost:9983";
    CloudSolrClient solrCloudClient = new CloudSolrClient(zkHosts, true);
    //solrCloudClient.setDefaultCollection("myCloudData");
    solrCloudClient.setDefaultCollection("gettingstarted");
    /*
    // Thread Safe
    solrClient = new ConcurrentUpdateSolrClient(urlString, queueSize, threadCount);
    */
    // Deprecated - client
    //HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");
    long start = System.nanoTime();
    for (int i = 0; i < 500; ++i) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("cat", "book");
        doc.addField("id", "book-" + i);
        doc.addField("name", "The Legend of the Hobbit part " + i);
        solrCloudClient.add(doc);
        if (i % 100 == 0)
            System.out.println(" Every 100 records flush it");
        solrCloudClient.commit(); // periodically flush
    }
    solrCloudClient.commit(); 
    solrCloudClient.close();
    long end = System.nanoTime();
    long seconds = TimeUnit.NANOSECONDS.toSeconds(end - start);
    System.out.println(" All records are indexed, took " + seconds + " seconds");

 }
}

1 answer:

Answer 0 (score: 3):

You are committing every new document, which is unnecessary. It will run much quicker if you change your if (i % 100 == 0) block to:

if (i % 100 == 0) {
    System.out.println(" Every 100 records flush it");
    solrCloudClient.commit(); // periodically flush
}

On my machine, this indexes your 500 records in 14 seconds. If I remove the commit() call from the for loop entirely, the indexing finishes in 7 seconds.
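
For reference, here is a minimal sketch of that faster variant, with the per-document commit() removed and a single commit() issued once after the loop (it reuses the solrCloudClient and the document fields from the question's code):

for (int i = 0; i < 500; ++i) {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("cat", "book");
    doc.addField("id", "book-" + i);
    doc.addField("name", "The Legend of the Hobbit part " + i);
    solrCloudClient.add(doc);   // no commit inside the loop
}
solrCloudClient.commit();       // one commit after all documents are added
solrCloudClient.close();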

Alternatively, you can add a commitWithinMs parameter to your solrCloudClient.add() calls:

solrCloudClient.add(doc, 15000);

This guarantees that your records are committed within 15 seconds, and it will also speed up your indexing.
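
Building on that, here is a sketch that batches the documents and sends them to Solr in a single add() call with a commitWithin window instead of one request per document. Sending everything in one batch and the 15000 ms window are illustrative assumptions, not something prescribed by the answer; it also needs java.util.List and java.util.ArrayList on top of the question's imports.

// Collect all documents first, then index them in one request.
List<SolrInputDocument> docs = new ArrayList<>();
for (int i = 0; i < 500; ++i) {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("cat", "book");
    doc.addField("id", "book-" + i);
    doc.addField("name", "The Legend of the Hobbit part " + i);
    docs.add(doc);
}
// One network round trip; Solr makes the documents searchable within
// roughly 15 seconds (commitWithin), so no explicit commit() is needed.
solrCloudClient.add(docs, 15000);
solrCloudClient.close();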