Invalid HTTP host: Kafka Elasticsearch sink connector

Time: 2020-09-23 14:43:33

Tags: elasticsearch apache-kafka apache-kafka-connect

I am trying to use Elasticsearch as the database for my application, with Kafka Connect sitting between the two. Kafka Connect, Elasticsearch (version 7), and my application all run as containers on the same network. When I access Elasticsearch from the Kafka Connect container it succeeds (see the sketch below), yet the connector keeps throwing the error shown further down, and I cannot pinpoint what exactly is wrong.
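For context, the kind of manual reachability check described above could look like the snippet below when run from inside the Kafka Connect container. This is only a sketch using plain java.net; it assumes the hostname elasticsearch resolves on the shared Docker network and that security is disabled, as in the elasticsearch.yml further down.

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class EsReachabilityCheck {
    public static void main(String[] args) throws IOException {
        // Hit the Elasticsearch root endpoint over the shared container network.
        URL url = new URL("http://elasticsearch:9200");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setConnectTimeout(5_000);
        conn.setReadTimeout(5_000);
        // Prints "HTTP 200" when Elasticsearch is reachable from this container.
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}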

Error:

container_standalone    | java.lang.IllegalArgumentException: Invalid HTTP host: elasticsearch:9200/
container_standalone    |   at org.apache.http.HttpHost.create(HttpHost.java:123)
container_standalone    |   at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.lambda$getClientConfig$0(JestElasticsearchClient.java:201)
container_standalone    |   at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
container_standalone    |   at java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
container_standalone    |   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
container_standalone    |   at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
container_standalone    |   at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
container_standalone    |   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
container_standalone    |   at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
container_standalone    |   at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.getClientConfig(JestElasticsearchClient.java:201)
container_standalone    |   at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.<init>(JestElasticsearchClient.java:149)
container_standalone    |   at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.<init>(JestElasticsearchClient.java:142)
container_standalone    |   at io.confluent.connect.elasticsearch.ElasticsearchSinkTask.start(ElasticsearchSinkTask.java:122)
container_standalone    |   at io.confluent.connect.elasticsearch.ElasticsearchSinkTask.start(ElasticsearchSinkTask.java:51)
container_standalone    |   at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:300)
container_standalone    |   at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:189)
container_standalone    |   at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
container_standalone    |   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
container_standalone    |   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
container_standalone    |   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
container_standalone    |   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
container_standalone    |   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
container_standalone    |   at java.lang.Thread.run(Thread.java:748)
container_standalone    | [2020-09-23 09:47:09,366] ERROR WorkerSinkTask{id=elasticsearch-sink-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:180)
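For reference, the exception is raised by Apache HttpCore's HttpHost.create, which the connector applies to each address taken from connection.url (that is the lambda visible in the stack trace above). The sketch below, assuming HttpCore 4.x on the classpath, shows one input shape that produces exactly this message; the trailing slash is purely illustrative and is not taken from the configuration shown further down.

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

import org.apache.http.HttpHost;

public class HostParseDemo {
    public static void main(String[] args) {
        // A well-formed address parses cleanly.
        List<HttpHost> hosts = Arrays.stream("http://elasticsearch:9200".split(","))
                .map(HttpHost::create)
                .collect(Collectors.toList());
        System.out.println("Parsed: " + hosts); // [http://elasticsearch:9200]

        try {
            // HttpHost.create strips an optional scheme, then parses everything
            // after the last ':' as the port; "9200/" is not a number, so this
            // throws IllegalArgumentException("Invalid HTTP host: elasticsearch:9200/").
            HttpHost.create("http://elasticsearch:9200/");
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}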

Connector configuration file:

name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=vehicle
topic.index=test-vehicle
connection.url=http://elasticsearch:9200
connection.username=username
connection.password=password
type.name=log
key.ignore=true
schema.ignore=true

Elastic.yml file:

cluster.name: "docker-cluster"
network.host: 0.0.0.0
xpack.license.self_generated.type: trial
xpack.security.enabled: false
xpack.monitoring.collection.enabled: true

0 Answers:

There are no answers yet.