java.lang.IllegalArgumentException: Failed to parse query: {"query":

Date: 2018-01-22 20:06:03

Tags: scala apache-spark elasticsearch

I am trying to execute an Elasticsearch DSL query in Spark 2.2 with Scala 2.11.8. The version of Elasticsearch is 4.*. This is the library I am using with Spark:

<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch-spark-20_2.11</artifactId>
    <version>5.2.2</version>
</dependency>

This is my code:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
      .config("es.nodes", "localhost")
      .config("es.port", 9200)
      .config("es.nodes.wan.only", "true")
      .config("es.index.auto.create", "true")
      .appName("ES test")
      .master("local[*]")
      .getOrCreate()

    val myquery = """{"query":
                          {"bool": {
                             "must": [
                                {
                                   "has_child": {
                                       "filter": {
                                          ...
                                       }
                                    }
                                }
                             ]
                          }
                      }}"""


   val df = spark.read.format("org.elasticsearch.spark.sql")
      .option("query", myquery)
      .option("pushdown", "true")
      .load("myindex/items")

I have included only the main body of the DSL query above. When I run it, I get this error:

java.lang.IllegalArgumentException: Failed to parse query: {"query":

Initially, I thought the problem was the version of Elasticsearch. As far as I understand from GitHub, version 4 of Elasticsearch is not supported.

However, if I run the same code with a simple query, it correctly retrieves records from Elasticsearch:

var df = spark.read
              .format("org.elasticsearch.spark.sql")
              .option("es.query", "?q=public:*")
              .load("myindex/items")

So I think the problem is not related to the version, but to the way I express the query.
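
As far as I can tell from the elasticsearch-hadoop configuration docs, the query option (or es.query; the es. prefix seems optional here) accepts either a URI query or a raw query DSL string starting with {, so both forms below should in principle be valid ("match_all" is used only as a trivially valid placeholder, not my actual query):

    // URI-style query -- this form works for me:
    val dfUri = spark.read.format("org.elasticsearch.spark.sql")
      .option("es.query", "?q=public:*")
      .load("myindex/items")

    // Raw query DSL -- the failing case; the string must be well-formed JSON:
    val dfDsl = spark.read.format("org.elasticsearch.spark.sql")
      .option("es.query", """{"query": {"match_all": {}}}""")
      .load("myindex/items")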

This query works fine with cURL, so perhaps it needs to be transformed in some way before being passed to Spark?
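
One thing I can try, to rule out line breaks as the cause, is to collapse the triple-quoted string to a single line before passing it (just a guess, not a confirmed fix):

    // Naively collapse the multi-line string to single-line JSON. Note this
    // would also squash whitespace inside string values; harmless here since
    // none of the values in this query contain spaces.
    val compactQuery = myquery.replaceAll("\\s+", " ").trim

    val df = spark.read.format("org.elasticsearch.spark.sql")
      .option("query", compactQuery)
      .option("pushdown", "true")
      .load("myindex/items")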

Full error stack trace:

    Previous exception in task: Failed to parse query: {"query":
                         {"bool": {
                           "must": [
                             {
                               "has_child": {
                                 "filter": {
                                   "bool": {
                                     "must": [
                                       {
                                         "term": {
                                           "project": 579
                                         }
                                       },
                                       {
                                         "terms": {
                                           "status": [
                                             0,
                                             1,
                                             2
                                           ]
                                         }
                                       }
                                     ]
                                   }
                                 },
                                 "type": "status"
                               }
                             },
                             {
                               "has_child": {
                                 "filter": {
                                   "bool": {
                                     "must": [
                                       {
                                         "term": {
                                           "project": 579
                                         }
                                       },
                                       {
                                         "terms": {
                                           "entity": [
                                             4634
                                           ]
                                         }
                                       }
                                     ]
                                   }
                                 },
                                 "type": "annotation"
                               }
                             },
                             {
                               "term": {
                                 "project": 579
                               }
                             },
                             {
                               "range": {
                                 "publication_date": {
                                   "gte": "2017/01/01",
                                   "lte": "2017/04/01",
                                   "format": "yyyy/MM/dd"
                                 }
                               }
                             },
                             {
                               "bool": {
                                 "should": [
                                   {
                                     "terms": {
                                       "typology": [
                                         "news",
                                         "blog",
                                         "forum",
                                         "socialnetwork"
                                       ]
                                     }
                                   },
                                   {
                                     "terms": {
                                       "publishing_platform": [
                                         "twitter"
                                       ]
                                     }
                                   }
                                 ]
                               }
                             }
                           ]
                         }}
    org.elasticsearch.hadoop.rest.query.QueryUtils.parseQuery(QueryUtils.java:59)
    org.elasticsearch.hadoop.rest.RestService.createReader(RestService.java:417)
    org.elasticsearch.spark.rdd.AbstractEsRDDIterator.reader$lzycompute(AbstractEsRDDIterator.scala:49)
    org.elasticsearch.spark.rdd.AbstractEsRDDIterator.reader(AbstractEsRDDIterator.scala:42)
    org.elasticsearch.spark.rdd.AbstractEsRDDIterator.hasNext(AbstractEsRDDIterator.scala:61)
    scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
    org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
    org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395)
    org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:234)
    org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:228)
    org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
    org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
    org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    org.apache.spark.scheduler.Task.run(Task.scala:108)
    org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    java.lang.Thread.run(Thread.java:745)
    at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
    at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:116)
    at org.apache.spark.scheduler.Task.run(Task.scala:118)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
18/01/22 21:43:12 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.util.TaskCompletionListenerException: Failed to parse query: {"query":

And this one:

18/01/22 21:47:40 WARN ScalaRowValueReader: Field 'cluster' is backed by an array but the associated Spark Schema does not reflect this;
              (use es.read.field.as.array.include/exclude) 
18/01/22 21:47:40 WARN ScalaRowValueReader: Field 'project' is backed by an array but the associated Spark Schema does not reflect this;
              (use es.read.field.as.array.include/exclude) 
18/01/22 21:47:40 WARN ScalaRowValueReader: Field 'client' is backed by an array but the associated Spark Schema does not reflect this;
              (use es.read.field.as.array.include/exclude) 

18/01/22 21:47:40 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
scala.MatchError: Buffer(13473953) (of class scala.collection.convert.Wrappers$JListWrapper)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:276)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$StringConverter$.toCatalystImpl(CatalystTypeConverters.scala:275)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$CatalystTypeConverter.toCatalyst(CatalystTypeConverters.scala:103)
    at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$2.apply(CatalystTypeConverters.scala:379)
    at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$3.apply(ExistingRDD.scala:61)
    at org.apache.spark.sql.execution.RDDConversions$$anonfun$rowToRowRdd$1$$anonfun$apply$3.apply(ExistingRDD.scala:58)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:234)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:228)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
18/01/22 21:47:40 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): scala.MatchError: Buffer(13473953) (of class scala.collection.convert.Wrappers$JListWrapper)
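
Side note: the ScalaRowValueReader warnings above point to the es.read.field.as.array.include setting. Based on my reading of those messages, declaring the array-backed fields would look roughly like this (field names copied from the warnings; this concerns the MatchError, not the parse failure):

    // Sketch: tell the connector which fields are backed by arrays so the
    // inferred Spark schema matches the documents.
    val dfArrays = spark.read.format("org.elasticsearch.spark.sql")
      .option("es.read.field.as.array.include", "cluster,project,client")
      .option("query", myquery)
      .load("myindex/items")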

1 Answer:

Answer 0 (score: 1)

So the error says:

Caused by: org.codehaus.jackson.JsonParseException: Unexpected character (':' (code 58)):
 at [Source: java.io.StringReader@76aeea7a; line: 2, column: 33]

If you look at the query

val myquery = """{"query":
                      "bool": {

you can see that the reported position (line 2, column 33) most likely maps to the ":" after "bool", and obviously what you have there is not valid JSON. To make it clear, here it is reformatted:

{"query": "bool": { ...

Most likely you forgot a "{" after "query" (and possibly a matching "}" at the end). Compare this with the example from the official docs:

{
  "query": {
    "bool" : {
      "must" : {