The Flink 1.5.0 Elasticsearch connector page has the following code:
Map<String, String> config = new HashMap<>();
config.put("cluster.name", "my-cluster-name");
// This instructs the sink to emit after every element, otherwise they would be buffered
config.put("bulk.flush.max.actions", "1");
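For context, in the connector documentation this map is the userConfig handed to the ElasticsearchSink constructor, and the bulk.flush.* keys are interpreted by the Flink sink itself, not by Elasticsearch. A minimal sketch along the lines of the documented example, assuming a DataStream[String] named stream and placeholder host/index/type names:

```scala
import java.net.{InetAddress, InetSocketAddress}
import java.util

import org.apache.flink.api.common.functions.RuntimeContext
import org.apache.flink.streaming.connectors.elasticsearch.{ElasticsearchSinkFunction, RequestIndexer}
import org.apache.flink.streaming.connectors.elasticsearch5.ElasticsearchSink

import org.elasticsearch.client.Requests

val config = new util.HashMap[String, String]
config.put("cluster.name", "my-cluster-name")
// Consumed by the Flink sink itself: flush after every element.
config.put("bulk.flush.max.actions", "1")

val transportAddresses = new util.ArrayList[InetSocketAddress]
transportAddresses.add(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 9300))

stream.addSink(new ElasticsearchSink(config, transportAddresses,
  new ElasticsearchSinkFunction[String] {
    override def process(element: String, ctx: RuntimeContext, indexer: RequestIndexer): Unit = {
      val json = new util.HashMap[String, String]
      json.put("data", element)
      indexer.add(Requests.indexRequest.index("my-index").`type`("my-type").source(json))
    }
  }))
```

The sink strips the bulk.flush.* entries from this map before building its internal Elasticsearch client, which is why they are legal there.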
It seems that bulk.flush.max.actions has been deprecated or removed. After the program starts, I get the exception below.
Here is my exact code (a slightly modified version of this file):
@throws[Exception]
override def open(parameters: Configuration) {
  val config = new util.HashMap[String, String]
  config.put("bulk.flush.max.size.mb", "1")
  config.put("cluster.name", cluster)

  val settings = Settings.builder()
    .put(config)
    .build()

  client = new PreBuiltTransportClient(settings)
    .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(host), port))
}
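Note that the exception is raised by Elasticsearch itself: any Settings handed to PreBuiltTransportClient are validated by the server/client library, and bulk.flush.max.size.mb is a key the Flink sink understands, not an Elasticsearch setting. A sketch of open() that keeps only genuine Elasticsearch settings in the client (cluster, host, and port as in the code above):

```scala
@throws[Exception]
override def open(parameters: Configuration) {
  // Only real Elasticsearch settings may reach the client; ES 5.x rejects unknown keys.
  val settings = Settings.builder()
    .put("cluster.name", cluster)
    .build()
  client = new PreBuiltTransportClient(settings)
    .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(host), port))
}
```

Flush-related tuning would then live in the userConfig map passed to the ElasticsearchSink, not in this client.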
Here is the exception I get:
Exception in thread "main" org.apache.flink.runtime.client.JobExecutionException: java.lang.IllegalArgumentException: unknown setting [bulk.flush.max.size.mb] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
    at org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:625)
    at org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:121)
    at org.apache.flink.streaming.api.scala.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.scala:654)
    at org.myorg.quickstart.StreamingKafkaClient$.main(StreamingKafkaClient.scala:63)
    at org.myorg.quickstart.StreamingKafkaClient.main(StreamingKafkaClient.scala)
Also, it seems the Flink connector does not work with ES 6.x; I had to uninstall ES 6.x and move back to ES 5.x to get the Flink-to-ES connection working. I want to use bulk.flush.max.actions because I suspect events are being buffered for a while and not pushed to ES (or something else may be going on, but that is outside the scope of this question).
I am using Flink 1.5.0; the relevant excerpt from pom.xml is:
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-elasticsearch5_2.11</artifactId>
    <version>1.5.0</version>
</dependency>
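Regarding ES 6.x: an official elasticsearch6 connector only appeared in a later Flink release (1.6.0), which would explain why the elasticsearch5 connector above fails against an ES 6.x cluster. A dependency sketch, assuming an upgrade to that later Flink version:

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-elasticsearch6_2.11</artifactId>
    <version>1.6.0</version>
</dependency>
```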