Elasticsearch: getting java.lang.IllegalArgumentException: The number of object passed must be even but was [1]

Date: 2016-09-27 05:15:11

Tags: java elasticsearch

Elasticsearch version - 2.4.0

Log:

java.lang.IllegalArgumentException: The number of object passed must be even but was [1]
        at org.elasticsearch.action.index.IndexRequest.source(IndexRequest.java:451)
        at org.elasticsearch.action.index.IndexRequestBuilder.setSource(IndexRequestBuilder.java:186)
        at org.apache.kafka.connect.elasticsearchschema.ElasticsearchSinkTask.put(ElasticsearchSinkTask.java:138)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:381)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:227)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)

Code:

// This method will put the SinkRecords which are sent in bulk to Elastic Search with proper index and type. 
public void put(Collection<SinkRecord> sinkRecords) {
    try {
      // Gets a list of SinkRecord from Kafka broker. 
      List<SinkRecord> records = new ArrayList<SinkRecord>(sinkRecords);
      for (int i = 0; i < records.size(); i++) {
        BulkRequestBuilder bulkRequest = client.prepareBulk();
        // Loop through the SinkRecords, batching at most bulkSize records per bulk request.
        for (int j = 0; j < bulkSize && i < records.size(); j++, i++) {
          SinkRecord record = records.get(i);
          // Index and type are hardcoded; record.value() contains the JSON message.
          bulkRequest.add(client.prepareIndex("operative1", "test").setSource(record.value()));
        }
        i--;
        // Executing bulk requests.
        BulkResponse bulkResponse = bulkRequest.execute().actionGet();
      }
    } catch (Exception e) {
    }
  }

The input given is -> { "id1": "file", "value1": "File" }

Please help resolve this issue.

1 Answer:

Answer (score: 0):

The final code works by passing a Map. As the stack trace shows, setSource(record.value()) resolves to the setSource(Object...) varargs overload, which expects field name/value pairs and therefore an even number of arguments; since only the single JSON value is passed, IndexRequest.source throws the exception. Parsing the JSON into a Map and passing that Map to setSource avoids this.

// This method will put the SinkRecords which are sent in bulk to Elastic Search with proper index and type. 
public void put(Collection<SinkRecord> sinkRecords) {
    try {
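      // Jackson's ObjectMapper (com.fasterxml.jackson.databind.ObjectMapper) and TypeReference
      // (com.fasterxml.jackson.core.type.TypeReference) are used to parse the JSON value into a Map.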
      ObjectMapper mapper = new ObjectMapper();
      // Gets a list of SinkRecord from Kafka broker. 
      List<SinkRecord> records = new ArrayList<SinkRecord>(sinkRecords);
      for (int i = 0; i < records.size(); i++) {
        BulkRequestBuilder bulkRequest = client.prepareBulk();
        // Loop through the SinkRecords, batching at most bulkSize records per bulk request.
        for (int j = 0; j < bulkSize && i < records.size(); j++, i++) {
          SinkRecord record = records.get(i);
          // Index and type are hardcoded; record.value() contains the JSON message.
          // Parse the JSON string into a Map so that the Map overload of setSource is used.
          Map<String, Object> map = mapper.readValue((String) record.value(),
              new TypeReference<Map<String, Object>>() {});
          bulkRequest.add(client.prepareIndex("operative1", "test").setSource(map));
        }
        i--;
        // Executing bulk requests.
        BulkResponse bulkResponse = bulkRequest.execute().actionGet();
      }
    } catch (Exception e) {
    }
  }
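
Alternatively (a minimal sketch, not part of the original answer): if record.value() is already a JSON string, the String overload of setSource could be used directly, which skips the Jackson parsing step and still avoids the setSource(Object...) varargs overload that caused the error. This assumes the Elasticsearch 2.x IndexRequestBuilder exposes a setSource(String) overload and that every record value is valid JSON text; client and bulkRequest are the same objects as in the code above.

// Hypothetical alternative for the body of the inner loop: cast the value so the
// setSource(String) overload is chosen instead of setSource(Object...),
// which requires an even number of key/value arguments.
SinkRecord record = records.get(i);
bulkRequest.add(
    client.prepareIndex("operative1", "test")
          .setSource((String) record.value()));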