pyspark - Writing a DStream to Elasticsearch

Date: 2017-07-25 14:06:19

Tags: elasticsearch pyspark spark-streaming dstream

I'm running into a problem indexing data from Spark Streaming (PySpark) into Elasticsearch. The data is of type DStream. Here is what it looks like:

(u'01B', 0)
(u'1A5', 1)
....

This is the Elasticsearch index I'm using: index = clus and type = data:

GET /clus/_mapping/data
{
   "clus": {
      "mappings": {
         "data": {
            "properties": {
               "content": {
                  "type": "text"
               }
            }
         }
      }
   }
}

Here is my code:

from elasticsearch import Elasticsearch
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

ES_HOST = {
    "host": "localhost",
    "port": 9200
}

INDEX_NAME = 'clus'
TYPE_NAME = 'data'
ID_FIELD = 'responseID' 

# create ES client
es = Elasticsearch(hosts = [ES_HOST])

# some config before sending to Elasticsearch
if not es.indices.exists(INDEX_NAME):
    request_body = {
        "settings": {
            "number_of_shards": 1,
            "number_of_replicas": 0
        }
    }
    res = es.indices.create(index=INDEX_NAME, body=request_body)

es_write_conf = {
    "es.nodes": "localhost",
    "es.port": "9200",
    "es.resource": INDEX_NAME + "/" + TYPE_NAME
}
sc = SparkContext(appName="PythonStreamingKafka")
ssc = StreamingContext(sc, 30)

# .....
# loading data to put in Elasticsearch: lines4

lines4.foreachRDD(lambda rdd: rdd.saveAsNewAPIHadoopFile(
    path='-',
    outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
    keyClass="org.apache.hadoop.io.NullWritable",
    valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
    conf=es_write_conf))

ssc.start()
ssc.awaitTermination()

Here is the error:


17/07/25 15:31:31 ERROR Executor: Exception in task 2.0 in stage 11.0 (TID 23)
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: Found unrecoverable error [127.0.0.1:9200] returned Bad Request(400) - Could not parse; Bailing out..
    at org.elasticsearch.hadoop.rest.RestClient.processBulkResponse(RestClient.java:251)
    at org.elasticsearch.hadoop.rest.RestClient.bulk(RestClient.java:203)
    at org.elasticsearch.hadoop.rest.RestRepository.tryFlush(RestRepository.java:220)
    at org.elasticsearch.hadoop.rest.RestRepository.flush(RestRepository.java:242)
    at org.elasticsearch.hadoop.rest.RestRepository.close(RestRepository.java:267)
    at org.elasticsearch.hadoop.mr.EsOutputFormat$EsRecordWriter.doClose(EsOutputFormat.java:214)
    at org.elasticsearch.hadoop.mr.EsOutputFormat$EsRecordWriter.close(EsOutputFormat.java:196)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12$$anonfun$apply$5.apply$mcV$sp(PairRDDFunctions.scala:1119)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1295)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1119)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1091)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:748)

The same exception and stack trace repeat for task 0.0 (TID 21) and task 1.0 (TID 22) of stage 11.0.

1 Answer:

Answer 0 (score: 1)

It looks like the way you create the index is wrong. When creating an index, you need to send the mapping in the body of the request. Here is a working example:

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# create the index with an explicit mapping
# (note: the create-index body takes "mappings" at the top level;
# the index name wrapper belongs only to GET _mapping output)
index_name = "clus"
index_mapping = {
    "mappings": {
        "data": {
            "properties": {
                "content": {
                    "type": "text"
                }
            }
        }
    }
}

if not es.indices.exists(index_name):
    res = es.indices.create(index=index_name, body=index_mapping)
    print(res)

You should get {u'acknowledged': True} back as the response, confirming that your index was created.
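If you want to double-check that the mapping actually took effect, you can read it back with the same client (a minimal sketch, reusing the es and index_name from above):

# Sanity check: fetch the mapping back from the cluster; the "content"
# field should appear with type "text".
mapping = es.indices.get_mapping(index=index_name)
print(mapping)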

Then loop over the data in the DStream with foreachRDD, and apply a function that converts each record to a JSON structure like {"content": str((u'1A5', 1))} and indexes it as follows:

doc = {"content": str((u'1A5', 1))}
res = es.index(index="clus", doc_type='data', body=doc)
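To wire this into the stream itself, one option is a sketch along these lines (assuming lines4 is the DStream from the question; the client is created inside each partition because Elasticsearch connections cannot be pickled and shipped from the driver):

from elasticsearch import Elasticsearch

def index_partition(records):
    # Create the client on the executor: the Elasticsearch client is not
    # serializable, so it must not be captured from the driver.
    es = Elasticsearch(["http://localhost:9200"])
    for record in records:
        doc = {"content": str(record)}
        es.index(index="clus", doc_type="data", body=doc)

lines4.foreachRDD(lambda rdd: rdd.foreachPartition(index_partition))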

As a side note, it is not recommended to index the data as a stringified tuple like (u'1A5', 1); that makes it hard to use in other contexts, such as visualizations in Kibana. An example of splitting it into separate fields follows.
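For instance, you could index each element of the tuple as its own field (the field names below are just illustrative, and this relies on Elasticsearch's default dynamic mapping to pick up the new fields):

# Hypothetical alternative: split the tuple into named fields so that
# Kibana can filter and aggregate on them individually.
key, value = (u'1A5', 1)
doc = {"id": key, "count": value}
res = es.index(index="clus", doc_type="data", body=doc)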