Spark Streaming with the Elasticsearch connector throws a JVM_Bind error

Date: 2017-05-15 14:41:37

Tags: java apache-spark spark-streaming

I am using Spark 2.1.1 with elasticsearch-spark-20_2.11 (version 5.3.2) in Java to write data into Elasticsearch. I create a JavaStreamingContext and then set it to await termination, so the application should keep retrieving new data.

After reading the stream, I split it into RDDs and, for each one, apply SQL aggregations and then write the results to Elasticsearch, as follows:

        recordStream.foreachRDD(rdd -> {
            if (rdd.count() > 0) {
                /*
                 * Create a DataFrame from the JSON records
                 */
                Dataset<Row> df = spark.read().json(rdd.rdd());
                df.createOrReplaceTempView("data");
                df.cache();
                /*
                 * Apply the aggregations
                 */
                Dataset<Row> aggregators = spark.sql(ORDER_TYPE_DB);
                JavaEsSparkSQL.saveToEs(aggregators.toDF(), "order_analytics/record");
                aggregators = spark.sql(ORDER_CUSTOMER_DB);
                JavaEsSparkSQL.saveToEs(aggregators.toDF(), "customer_analytics/record");
            }
        });
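As an aside, the error message below names the `es.nodes.wan.only` setting, and `JavaEsSparkSQL.saveToEs` also has an overload that accepts a per-write settings map. A hedged sketch of passing connector settings that way (the host and port values here are placeholders, not taken from the question) could look like:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical per-write connector settings; adjust host/port to the real cluster.
Map<String, String> esCfg = new HashMap<>();
esCfg.put("es.nodes", "localhost");        // assumption: where Elasticsearch is reachable
esCfg.put("es.port", "9200");              // assumption: default HTTP port
esCfg.put("es.nodes.wan.only", "true");    // the setting named in the error message

// Same write as above, but with explicit settings for this call only
JavaEsSparkSQL.saveToEs(aggregators.toDF(), "order_analytics/record", esCfg);
```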

This works fine the first time data is read and inserted into Elasticsearch, but when the stream retrieves more data I get the following error:

org.elasticsearch.hadoop.EsHadoopIllegalArgumentException: Cannot detect ES version - typically this happens if the network/Elasticsearch cluster is not accessible or when targeting a WAN/Cloud instance without the proper setting 'es.nodes.wan.only'
    at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:250)
    at org.elasticsearch.hadoop.rest.RestService.createWriter(RestService.java:546)
    at org.elasticsearch.spark.rdd.EsRDDWriter.write(EsRDDWriter.scala:58)
    at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
    at org.elasticsearch.spark.sql.EsSparkSQL$$anonfun$saveToEs$1.apply(EsSparkSQL.scala:94)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopTransportException: java.net.BindException: Address already in use: JVM_Bind
    at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:129)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:461)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:425)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:429)
    at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:155)
    at org.elasticsearch.hadoop.rest.RestClient.remoteEsVersion(RestClient.java:627)
    at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:243)
    ... 10 more
Caused by: java.net.BindException: Address already in use: JVM_Bind
    at java.net.DualStackPlainSocketImpl.bind0(Native Method)
    at java.net.DualStackPlainSocketImpl.socketBind(DualStackPlainSocketImpl.java:106)
    at java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:387)
    at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:190)
    at java.net.Socket.bind(Socket.java:644)
    at java.net.Socket.<init>(Socket.java:433)
    at java.net.Socket.<init>(Socket.java:286)
    at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:80)
    at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:122)
    at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
    at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387)
    at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
    at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
    at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
    at org.elasticsearch.hadoop.rest.commonshttp.CommonsHttpTransport.execute(CommonsHttpTransport.java:478)
    at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:112)
    ... 16 more

Any idea what the problem could be?

Spark uses the default configuration and is instantiated in Java as:

SparkConf conf = new SparkConf().setAppName(topic).setMaster("local");
JavaStreamingContext streamingContext = new JavaStreamingContext(conf, Durations.seconds(2));
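For comparison, the Elasticsearch connector settings can also be supplied on the SparkConf itself, and Spark Streaming generally needs more than one local core (with `local`, the single thread can be monopolized by a receiver). A sketch of that setup, where the host/port values are assumptions rather than values from the question:

```java
// Sketch: same context creation, with ES connector settings on the conf.
// "localhost"/"9200" are placeholder assumptions for the cluster location.
SparkConf conf = new SparkConf()
        .setAppName(topic)
        .setMaster("local[*]")                 // multiple threads, so receiver and tasks can run together
        .set("es.nodes", "localhost")          // assumption: ES host
        .set("es.port", "9200")                // assumption: ES HTTP port
        .set("es.nodes.wan.only", "true");     // flag named in the error message
JavaStreamingContext streamingContext = new JavaStreamingContext(conf, Durations.seconds(2));
```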

Elasticsearch is configured via Docker Compose with the following environment parameters:

    - cluster.name=cp-es-cluster
    - node.name=cloud1
    - http.cors.enabled=true
    - http.cors.allow-origin="*"
    - network.host=0.0.0.0
    - discovery.zen.ping.unicast.hosts=${ENV_IP}
    - network.publish_host=${ENV_IP}
    - discovery.zen.minimum_master_nodes=1
    - xpack.security.enabled=false
    - xpack.monitoring.enabled=false

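Putting those parameters into context, a minimal Compose service might look like the sketch below; the image tag and port mappings are assumptions (the question only lists the environment variables), and `ENV_IP` is expected to be set in the shell or an `.env` file:

```yaml
# Sketch of a docker-compose.yml service wrapping the environment shown above.
# Image tag and ports are assumptions, not from the question.
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.3.2
    ports:
      - "9200:9200"   # HTTP
      - "9300:9300"   # transport
    environment:
      - cluster.name=cp-es-cluster
      - node.name=cloud1
      - http.cors.enabled=true
      - http.cors.allow-origin="*"
      - network.host=0.0.0.0
      - discovery.zen.ping.unicast.hosts=${ENV_IP}
      - network.publish_host=${ENV_IP}
      - discovery.zen.minimum_master_nodes=1
      - xpack.security.enabled=false
      - xpack.monitoring.enabled=false
```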
0 Answers:
