How do I store DStream data (JSON) into Cassandra?

Date: 2017-05-16 11:10:08

Tags: json spark-streaming kafka-consumer-api spark-cassandra-connector

       val topics = "test"
       val zkQuorum = "localhost:2181"
       val group = "test-consumer-group"
       val numThreads = 1   // consumer threads per topic (was undefined in the original)
       val sparkConf = new org.apache.spark.SparkConf()
          .setAppName("XXXXX")
          .setMaster("local[*]")
          .set("spark.cassandra.connection.host", "127.0.0.1")
          .set("spark.cassandra.connection.port", "9042")

        val ssc = new StreamingContext(sparkConf, Seconds(2))
        ssc.checkpoint("checkpoint")
        val topicMap = topics.split(",").map((_, numThreads)).toMap

        // Each element of `lines` is the message value: a JSON string
        val lines = KafkaUtils.createStream(ssc, zkQuorum, group, topicMap).map(_._2)

I am receiving the DStream (JSON) like this:

[{"id":100,"firstName":"Beulah","lastName":"Fleming","gender":"female","ethnicity":"SpEd","height":167,"address":27,"createdDate":1494489672243,"lastUpdatedDate":1494489672244,"isDeleted":0},{"id":101,"firstName":"Traci","lastName":"Summers","gender":"female","ethnicity":"Frp","height":181,"address":544,"createdDate":1494510639611,"lastUpdatedDate":1494510639611,"isDeleted":0}]

With the above program I get JSON data in the DStream. How do I process this DStream and store it in Cassandra or Elasticsearch? In other words, how do I extract the data from the DStream (in JSON format) and save it to Cassandra?

1 answer:

Answer 0 (score: 0)

You need to import com.datastax.spark.connector.streaming._ (this provides the implicit saveToCassandra method on DStreams) and convert the elements of the stream into an appropriate case class:

case class Record(id: Int, firstName: String, ...)
val columns = SomeColumns("id", "first_name", ...)
val mapped = lines.map(json => functionThatReturnsARecordObject(json))

and save it using the implicit saveToCassandra function:

mapped.saveToCassandra(KEYSPACE_NAME, TABLE_NAME, columns)

For more details, check the documentation: https://github.com/datastax/spark-cassandra-connector/blob/master/doc/5_saving.md
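Putting the steps above together, here is a minimal sketch of the parse-and-save step. It assumes json4s (which is bundled with Spark) for JSON parsing, a case class whose fields mirror the sample JSON, and a hypothetical Cassandra table `test.records` whose columns use snake_case names (the connector's default column mapper translates `firstName` to `first_name`); adjust the keyspace, table, and column names to your actual schema.

    import org.json4s._
    import org.json4s.jackson.JsonMethods.parse
    import com.datastax.spark.connector.streaming._  // adds saveToCassandra to DStreams

    case class Record(id: Int, firstName: String, lastName: String,
                      gender: String, ethnicity: String, height: Int,
                      address: Int, createdDate: Long,
                      lastUpdatedDate: Long, isDeleted: Int)

    // Each Kafka message is a JSON *array* of records, so flatMap each
    // message string into its individual Record objects.
    val records = lines.flatMap { message =>
      implicit val formats: Formats = DefaultFormats
      parse(message).extract[List[Record]]
    }

    // "test" / "records" are placeholder keyspace and table names.
    records.saveToCassandra("test", "records")

Note that the table and keyspace must already exist in Cassandra before the stream starts; the connector does not create them for you.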