How to process real-time streaming data/logs using Spark Streaming?

Date: 2016-04-19 06:28:35

Tags: apache-spark apache-spark-sql spark-streaming spark-dataframe

I am new to Spark and Scala.

I want to implement a real-time Spark consumer that reads network logs every minute [fetching around 1 GB of JSON log lines per minute from a Kafka publisher] and finally stores the aggregated values in Elasticsearch.

The aggregation is based on a few values [such as bytes_in, bytes_out, etc.] using a composite key [such as client MAC, client IP, server MAC, server IP, etc.].

The Spark consumer I have written is:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.functions.{count, sum}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import org.elasticsearch.spark.sql._

object LogsAnalyzerScalaCS {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setAppName("LOGS-AGGREGATION")
    sparkConf.set("es.nodes", "my ip address")
    sparkConf.set("es.port", "9200")
    sparkConf.set("es.index.auto.create", "true")
    sparkConf.set("es.nodes.discovery", "false")

    val elasticResource = "conrec_1min/1minute"
    val ssc = new StreamingContext(sparkConf, Seconds(30))
    val zkQuorum = "my zk quorum IPs:2181"
    val consumerGroupId = "LogsConsumer"
    val topics = "Logs"
    val topicMap = topics.split(",").map((_, 3)).toMap
    // Receiver-based Kafka stream; each record is a (key, jsonString) pair
    val json = KafkaUtils.createStream(ssc, zkQuorum, consumerGroupId, topicMap)
    val logJSON = json.map(_._2)
    try {
      logJSON.foreachRDD(rdd => {
        if (!rdd.isEmpty()) {
          val sqlContext = SQLContextSingleton.getInstance(rdd.sparkContext)
          import sqlContext.implicits._
          val df = sqlContext.read.json(rdd)
          // Aggregate per composite key within the current 30-second batch
          val groupedData = df
            .groupBy("id", "start_time_formated", "l2_c", "l3_c",
                     "l4_c", "l2_s", "l3_s", "l4_s")
            .agg(count("f_id") as "total_f",
                 sum("p_out") as "total_p_out",
                 sum("p_in") as "total_p_in",
                 sum("b_out") as "total_b_out",
                 sum("b_in") as "total_b_in",
                 sum("duration") as "total_duration")
          val dataForES = groupedData.withColumnRenamed("start_time_formated", "start_time")
          dataForES.saveToEs(elasticResource)
          dataForES.show()
        }
      })
    } catch {
      case e: Exception => print("Exception has occurred : " + e.getMessage)
    }
    ssc.start()
    ssc.awaitTermination()
  }

  object SQLContextSingleton {
    @transient private var instance: org.apache.spark.sql.SQLContext = _

    def getInstance(sparkContext: SparkContext): org.apache.spark.sql.SQLContext = {
      if (instance == null) {
        instance = new org.apache.spark.sql.SQLContext(sparkContext)
      }
      instance
    }
  }
}

First of all, I would like to know whether my approach is correct [considering that I need 1-minute aggregation of the logs]?

There appear to be the following problems with this code:

  1. This consumer pulls data from the Kafka brokers every 30 seconds and saves the final aggregation to Elasticsearch as 30-second data, which increases the number of rows in Elasticsearch per unique key [at least 2 entries per minute]. The UI tool [say Kibana] then has to do further aggregation. If I increase the polling time from 30 seconds to 60 seconds, the aggregation itself takes too long, so it is not real-time at all.
  2. I would like to implement it in such a way that only one row per key is stored in Elasticsearch. Hence I want to keep aggregating until I stop getting new key values in the data set I obtain from the Kafka brokers [every minute]. After some googling I found that this could be done with the groupByKey() and updateStateByKey() functions, but I am unable to figure out how to use them in my case [should I convert the JSON log lines into a log-line string with flat values and then use these functions there? see the sketch after this list]. If I use these functions, when should I save the final aggregated values into Elasticsearch?
  3. Is there any other way of achieving this?
  4. Your quick help will be appreciated.
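A minimal sketch of the updateStateByKey approach mentioned in point 2, assuming the JSON lines have already been parsed into (compositeKey, metrics) pairs. The Metrics case class, its field names, and the checkpoint directory are hypothetical and not taken from the question's code:

// Hypothetical per-key metrics; field names are assumptions, not from the question.
case class Metrics(fCount: Long, pOut: Long, pIn: Long, bOut: Long, bIn: Long, duration: Long)

// Merge the metrics of the current batch into the running state for a key.
def updateMetrics(newValues: Seq[Metrics], state: Option[Metrics]): Option[Metrics] =
  (state.toSeq ++ newValues).reduceOption { (a, b) =>
    Metrics(a.fCount + b.fCount, a.pOut + b.pOut, a.pIn + b.pIn,
            a.bOut + b.bOut, a.bIn + b.bIn, a.duration + b.duration)
  }

// keyedMetrics: DStream[(String, Metrics)] built by parsing each JSON line and
// concatenating the composite-key columns into a single string key.
// ssc.checkpoint("checkpoint-dir")          // updateStateByKey requires checkpointing
// val runningTotals = keyedMetrics.updateStateByKey(updateMetrics _)
// runningTotals.foreachRDD { rdd =>
//   // Convert to a DataFrame and write to Elasticsearch with the composite key
//   // as the document id, so each key keeps one document instead of one row per batch.
// }

Note that updateStateByKey keeps state for every key indefinitely unless the update function returns None, so with a high-cardinality composite key the state would need to be expired at some point.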

    Regards, Bhupesh

1 answer:

Answer 0 (score: 0)

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.sql.SQLContext
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object Main {
  def main(args: Array[String]): Unit = {

    val conf = new SparkConf().setAppName("KafkaWordCount").setMaster("local[*]")
    val ssc = new StreamingContext(conf, Seconds(15))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092", // additional brokers: localhost:9094,localhost:9095
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "group1",
      "auto.offset.reset" -> "earliest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    val topics = Array("test")
    // Direct (receiver-less) Kafka stream using the kafka010 integration
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams)
    )

    // Keep only the message value of each Kafka record
    val out = stream.map(record => record.value)

    // Classic word count over each batch
    val words = out.flatMap(_.split(" "))
    val count = words.map(word => (word, 1))
    val wdc = count.reduceByKey(_ + _)

    val sqlContext = SQLContext.getOrCreate(SparkContext.getOrCreate())

    wdc.foreachRDD { rdd =>
      val es = sqlContext.createDataFrame(rdd).toDF("word", "count")
      import org.elasticsearch.spark.sql._
      es.saveToEs("wordcount/testing")
      es.show()
    }

    ssc.start()
    ssc.awaitTermination()
  }
}

To see the full example and the sbt build files:
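The answer's saveToEs call relies on the connector's defaults (Elasticsearch on localhost:9200). Below is a minimal sketch of how the question's Elasticsearch settings could be carried over to this SparkConf, and how a per-key document id could avoid one row per batch; the use of es.mapping.id here is an assumption about the desired upsert behaviour and is not part of the answer:

// Sketch only: point the elasticsearch-hadoop connector at a remote cluster
// (values mirror the question's configuration).
val conf = new SparkConf()
  .setAppName("KafkaWordCount")
  .setMaster("local[*]")
  .set("es.nodes", "my ip address")   // from the question; replace as needed
  .set("es.port", "9200")
  .set("es.index.auto.create", "true")

// Inside foreachRDD, using the "word" column as the document id means a later
// batch overwrites the same document instead of appending a new row:
// es.saveToEs("wordcount/testing", Map("es.mapping.id" -> "word"))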