Spark RDD does not fetch all source fields from Elasticsearch

Date: 2016-03-04 07:16:12

Tags: scala elasticsearch apache-spark rdd

I have the following data in Elasticsearch (a local single-node server).

Search command: curl -XPOST 'localhost:9200/sparkdemo/_search?pretty' -d '{ "query": { "match_all": {} } }'

Output:

{
  "took" : 4,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  },
  "hits" : {
    "total" : 10,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "sparkdemo",
      "_type" : "hrinfo",
      "_id" : "AVNAY_H0lYe0cQl--Bin",
      "_score" : 1.0,
      "_source" : {
        "date" : "9/Mar/2016",
        "pid" : "1",
        "propName" : "HEARTRATE",
        "var" : null,
        "propValue" : 86,
        "avg" : 86,
        "stage" : "S1"
      }
    }, {
      "_index" : "sparkdemo",
      "_type" : "hrinfo",
      "_id" : "AVNAY_KklYe0cQl--Bir",
      "_score" : 1.0,
      "_source" : {
        "date" : "13/Mar/2016",
        "pid" : "1",
        "propName" : "HEARTRATE",
        "var" : null,
        "propValue" : 86,
        "avg" : 87,
        "stage" : "S1"
      }
    }, {
      "_index" : "sparkdemo",
      "_type" : "hrinfo",
      "_id" : "AVNAY-TolYe0cQl--Bii",
      "_score" : 1.0,
      "_source" : {
        "date" : "4/Mar/2016",
        "pid" : "1",
        "propName" : "HEARTRATE",
        "var" : null,
        "propValue" : 82,
        "avg" : 82,
        "stage" : "S0"
      }
    }, 
    ... (a few more records omitted) ...
    }, {
      "_index" : "sparkdemo",
      "_type" : "hrinfo",
      "_id" : "AVNAY_KklYe0cQl--Biq",
      "_score" : 1.0,
      "_source" : {
        "date" : "12/Mar/2016",
        "pid" : "1",
        "propName" : "HEARTRATE",
        "var" : null,
        "propValue" : 91,
        "avg" : 89,
        "stage" : "S1"
      }
    } ]
  }
}

I am trying to fetch all of this data in a Spark program (a standalone local program run from Eclipse):

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.elasticsearch.spark._

object Test1 {

  def main(args: Array[String]) {
    val conf = new SparkConf().setMaster("local[2]").setAppName("HRInfo")
    val sc = new SparkContext(conf)

    // Read every document in sparkdemo/hrinfo as an (id, fieldMap) pair.
    val esRdd = sc.esRDD("sparkdemo/hrinfo", "?q=*")

    val searchResultRDD = esRdd.map(t => {
      println("id:" + t._1 + ", map:" + t._2)
      t._2
    })

    // Bring the field maps to the driver and print one Info(...) line each.
    searchResultRDD.collect().foreach(map => {
      val stage = map.get("stage")
      val pid = map.get("pid")
      val date = map.get("date")
      val propName = map.get("propName")
      val propValue = map.get("propValue")
      val avg = map.get("avg")
      val variation = map.get("var")

      println("Info(" + stage + "," + pid + "," + date + "," + propName + "," + propValue + "," + avg + "," + variation + ")")
    })
  }
}

However, the program is not fetching all the fields of the records stored in Elasticsearch.

Program output:

id:AVNAY_H0lYe0cQl--Bin, map:Map(date -> 9/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY_KklYe0cQl--Bir, map:Map(date -> 13/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY-TolYe0cQl--Bii, map:Map(date -> 4/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY_H0lYe0cQl--Bio, map:Map(date -> 10/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY_KklYe0cQl--Bip, map:Map(date -> 11/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY-TolYe0cQl--Bij, map:Map(date -> 5/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY-Y9lYe0cQl--Bil, map:Map(date -> 7/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY-Y9lYe0cQl--Bim, map:Map(date -> 8/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY-Y9lYe0cQl--Bik, map:Map(date -> 6/Mar/2016, pid -> 1, propName -> HEARTRATE)
id:AVNAY_KklYe0cQl--Biq, map:Map(date -> 12/Mar/2016, pid -> 1, propName -> HEARTRATE)
Info(None,Some(1),Some(9/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(13/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(4/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(10/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(11/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(5/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(7/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(8/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(6/Mar/2016),Some(HEARTRATE),None,None,None)
Info(None,Some(1),Some(12/Mar/2016),Some(HEARTRATE),None,None,None)

So the program fetches all the records, but within each record the other fields (i.e. stage, propValue, avg and variation) are missing. Why? Thank you.

2 Answers:

Answer 0 (score: 1):

This happens because of the "var": null value in the documents. In each document, "var": null and every value that follows it does not make it into the map on the Scala side.

You can see this by replacing one of the "var": null values with a non-null value such as "var": "test"; you will then get all the values back correctly, as you expect. Conversely, you can put a null value at the very beginning of a document, e.g.

curl -X POST 'http://localhost:9200/sparkdemo/hrinfo/5' -d '{"test":null,"date": "9/Mar/2016","pid": "1","propName": "HEARTRATE","propValue": 86,"avg": 86,"stage": "S1"}'

and the map for that document will be empty:

id:5, map:Map()
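If editing the documents themselves is not an option, a workaround on the Spark side is to keep the connector from ever seeing the null field, or to read the raw _source JSON and parse it yourself. Below is a minimal sketch; esRDD and esJsonRDD are documented elasticsearch-hadoop APIs, but whether the es.read.field.exclude setting is available depends on your connector version:

import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._

object NullFieldWorkaround {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("HRInfoWorkaround")
      // Skip the always-null "var" field so the fields after it survive.
      // Assumption: your elasticsearch-hadoop version supports this setting.
      .set("es.read.field.exclude", "var")
    val sc = new SparkContext(conf)

    // Option 1: with "var" excluded, the remaining fields should come through.
    sc.esRDD("sparkdemo/hrinfo", "?q=*").collect().foreach {
      case (id, doc) => println(s"id:$id, map:$doc")
    }

    // Option 2: fetch the raw _source JSON per document and parse it with any
    // JSON library, bypassing the connector's map conversion entirely.
    sc.esJsonRDD("sparkdemo/hrinfo", "?q=*").collect().foreach {
      case (id, json) => println(s"id:$id, json:$json")
    }

    sc.stop()
  }
}

Upgrading elasticsearch-hadoop may also help, since losing the fields after a null comes from the connector's map conversion, not from your query.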
