Confluent Kafka Elasticsearch: ClassCastException: TextNode cannot be cast to com.fasterxml.jackson.databind.node.ObjectNode

Date: 2018-08-14 11:53:23

Tags: elasticsearch apache-kafka apache-kafka-connect confluent

I am having trouble publishing a large JSON document. The JSON is nested but well-formed (~1 MB).

When I publish it, I get the following exception from the Confluent Elasticsearch connector:

connect_1    | [2018-08-14 11:42:57,532] ERROR Failed to execute batch 127 of 2 records (io.confluent.connect.elasticsearch.bulk.BulkProcessor)
connect_1    | java.lang.ClassCastException: com.fasterxml.jackson.databind.node.TextNode cannot be cast to com.fasterxml.jackson.databind.node.ObjectNode
connect_1    |  at io.confluent.connect.elasticsearch.BulkIndexingClient.execute(BulkIndexingClient.java:70)
connect_1    |  at io.confluent.connect.elasticsearch.BulkIndexingClient.execute(BulkIndexingClient.java:34)
connect_1    |  at io.confluent.connect.elasticsearch.bulk.BulkProcessor$BulkTask.execute(BulkProcessor.java:348)
connect_1    |  at io.confluent.connect.elasticsearch.bulk.BulkProcessor$BulkTask.call(BulkProcessor.java:326)
connect_1    |  at io.confluent.connect.elasticsearch.bulk.BulkProcessor$BulkTask.call(BulkProcessor.java:312)
connect_1    |  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
connect_1    |  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
connect_1    |  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
connect_1    |  at java.lang.Thread.run(Thread.java:745)
connect_1    | [2018-08-14 11:42:57,537] ERROR Failed to execute batch 126 of 19 records (io.confluent.connect.elasticsearch.bulk.BulkProcessor)

…and so on, with the same stack trace repeated for each batch.

I get no error messages from Kafka or ZooKeeper.

Here is my setup:

Kafka producer:

props.put("bootstrap.servers", ....);
props.put("acks", "all");
props.put("linger.ms", 1);
props.put("buffer.memory", 33554432);
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

Connect sink configuration:

{
  "name": "test-connector-aaa",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "tasks.max": "1",
    "topics": "aaa",
    "topic.index.map": "aaa:test",
    "key.ignore": "false",
    "schema.ignore": "false",
    "connection.url": "http://elastic:9200",
    "type.name": "Iata",
    "name": "elasticsearch-sink",
    "read.timeout.ms": "6000",
    "connection.timeout.ms": "5000"
  }
}
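One thing worth double-checking: the producer above writes plain JSON strings with `StringSerializer`, yet the connector is configured with `"schema.ignore": "false"`, which tells the Elasticsearch sink to build its mapping from the record's Connect schema. Schemaless JSON has no such schema. A possible pairing (a sketch, not a confirmed fix; whether it applies depends on your worker's default converters) would be to disable schemas in the converter and let Elasticsearch infer the mapping:

```
{
  "value.converter": "org.apache.kafka.connect.json.JsonConverter",
  "value.converter.schemas.enable": "false",
  "key.converter": "org.apache.kafka.connect.storage.StringConverter",
  "schema.ignore": "true"
}
```

These properties would go inside the `"config"` block above (or be set worker-wide for the converters).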

Any ideas? Thanks!

1 answer:

Answer 0 (score: 0)

This error should not be raised at the line shown in your stack trace. Below is the connector code that throws the exception. Please share your kafka-connect-elasticsearch version.

  final ObjectNode parsedError = (ObjectNode) OBJECT_MAPPER.readTree(item.error);
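That cast only succeeds when Elasticsearch's per-item error body parses to a JSON object. If the error comes back as a bare JSON string, Jackson's `readTree` returns a `TextNode` and the cast fails exactly as in the stack trace. A minimal sketch of the difference (the error strings here are made up for illustration):

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class ErrorNodeDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        // A structured error body parses to an ObjectNode, so the connector's
        // (ObjectNode) cast works:
        JsonNode structured = mapper.readTree("{\"type\":\"mapper_parsing_exception\"}");
        System.out.println(structured.isObject());

        // A bare JSON string parses to a TextNode; casting it to ObjectNode
        // would throw the ClassCastException seen in the logs:
        JsonNode plain = mapper.readTree("\"illustrative error text\"");
        System.out.println(plain.isObject());
        System.out.println(plain.isTextual());
    }
}
```

So the exception suggests Elasticsearch returned a non-object error payload for some items in the bulk response, which the connector version in use did not expect.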