Cassandra, Spark, Elasticsearch: streaming data for visualization in Kibana

Asked: 2015-04-20 08:25:57

Tags: elasticsearch cassandra streaming apache-spark

I am trying to visualize Spark data in Kibana. First, I create an RDD with the following command:

    val test = sc.cassandraTable("test","data")

Then I stream it to Elasticsearch using the Elasticsearch-Hadoop library:

    EsSpark.saveToEs(test, "spark/docs", Map("es.nodes" -> "192.168.1.88"))

However, I get this error:

    15/04/20 16:15:27 ERROR TaskSetManager: Task 0 in stage 12.0 failed 4 times; aborting job
    org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 12.0 failed 4 times, most recent failure: Lost task 0.3 in stage 12.0 (TID 36, 192.168.1.92): org.elasticsearch.hadoop.serialization.EsHadoopSerializationException: Cannot handle type [class com.datastax.spark.connector.CassandraRow]

Can someone guide me on streaming from Spark to Elasticsearch? Is there a better way to visualize data coming from Cassandra, Solr, or Spark? I came across Banana, but it does not seem to offer an option to publish dashboards.

Thanks

1 answer:

Answer 0 (score: 1)

According to the Spark Cassandra Connector Guide, you can first define a case class, map each CassandraRow to a case class object, and then save those objects to Elasticsearch. Below is the example code from the guide:

    case class WordCount(w: String, c: Int)

    object WordCount {
        implicit object Mapper extends DefaultColumnMapper[WordCount](
            Map("w" -> "word", "c" -> "count"))
    }

    sc.cassandraTable[WordCount]("test", "words").toArray
    // Array(WordCount(bar,20), WordCount(foo,10))

    sc.parallelize(Seq(WordCount("baz", 30), WordCount("foobar", 40)))
      .saveToCassandra("test", "words", SomeColumns("word", "count"))