Spark Streaming from Kafka, comparing records with MemSQL (count is not correct)

Time: 2017-06-05 06:30:36

Tags: scala apache-spark apache-kafka spark-streaming memsql

We are consuming records from Kafka with Spark Streaming and extracting the card number from each record. We then compare that card number against MemSQL, where we select the card number and its count grouped by card number. However, the counts do not come back correctly in Spark Streaming.

For example, when we execute the count query at the memsql command prompt, it gives the following output:

memsql> select card_number, count(*) from cardnumberalert5
        where inserted_time <= now() and inserted_time >= NOW() - INTERVAL 10 MINUTE
        group by card_number;
+------------------+----------+
| card_number      | count(*) |
+------------------+----------+
| 4556655960290527 |        2 |
| 6011255715328120 |        4 |
| 4532133676538232 |        2 |
| 6011614607071620 |        2 |
| 4024007117099605 |        2 |
| 347138718258304  |        4 |
+------------------+----------+

We noticed that in Spark Streaming the counts come back split up: each card number appears more than once, and the partial counts add up to the total that MemSQL reports (for example, 1 + 1 = 2 for card 4556655960290527, 2 + 2 = 4 for card 6011255715328120).

For example, when we execute it from the MemSQL command prompt, MemSQL outputs:

+------------------+----------+
| card_number      | count(*) |
+------------------+----------+
| 4556655960290527 |        2 |

When the same SQL is executed in Spark Streaming, it prints the output as:

RECORDS FOUNDS ****************************************
CARDNUMBER KAFKA ############### 4024007117099605
CARDNUMBER MEMSQL ############### 4556655960290527
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 4556655960290527
COUNT MEMSQL ############### 1

Here the count should show as 2, but instead we get two records for that card number, each with a count of 1.

Full output printed in Spark Streaming:

RECORDS FOUNDS ****************************************
CARDNUMBER KAFKA ############### 4024007117099605
CARDNUMBER MEMSQL ############### 4556655960290527
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 6011255715328120
COUNT MEMSQL ############### 2
CARDNUMBER MEMSQL ############### 4532133676538232
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 6011614607071620
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 4024007117099605
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 347138718258304
COUNT MEMSQL ############### 2
CARDNUMBER MEMSQL ############### 4556655960290527
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 6011255715328120
COUNT MEMSQL ############### 2
CARDNUMBER MEMSQL ############### 4532133676538232
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 6011614607071620
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 4024007117099605
COUNT MEMSQL ############### 1
CARDNUMBER MEMSQL ############### 347138718258304
COUNT MEMSQL ############### 2

Spark Streaming Program

import java.util.HashMap

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.spark.SparkContext
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

class SparkKafkaConsumer11(
    val ssc: StreamingContext,
    val sc: SparkContext,
    val spark: SparkSession,
    val topics: Array[String],
    val kafkaParam: scala.collection.immutable.Map[String, Object]) {

  val stream = KafkaUtils.createDirectStream[String, String](
    ssc,
    PreferConsistent,
    Subscribe[String, String](topics, kafkaParam)
  )

  // Keep only the value from each (key, value) pair for processing.
  val recordStream = stream.map(record => record.value)

  recordStream.foreachRDD { rdd =>

    // Producer used to publish an alert for card numbers seen three or more times.
    val brokers = "174.24.154.244:9092" // Specify the broker
    val props = new HashMap[String, Object]()
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers)
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
    props.put(ProducerConfig.CLIENT_ID_CONFIG, "SparkKafkaConsumer__11")
    val producer = new KafkaProducer[String, String](props)

    // Per-card-number counts over the last 10 minutes, read from MemSQL
    // through the connector.
    val result = spark.read
      .format("com.memsql.spark.connector")
      .options(Map(
        "query" -> "select card_number,count(*) from cardnumberalert5 where inserted_time <= now() and inserted_time >= NOW() - INTERVAL 10 MINUTE group by card_number",
        "database" -> "fraud"))
      .load()

    // Split each pipe-delimited Kafka record into an array of fields.
    val record = rdd.map(line => line.split("\\|"))

    record.collect().foreach { recordRDD =>
      val now1 = System.currentTimeMillis
      val now = new java.sql.Timestamp(now1)
      val cardnumber_kafka = recordRDD(13).toString
      val sessionID = recordRDD(1).toString
      println("RECORDS FOUNDS ****************************************")
      println("CARDNUMBER KAFKA ############### " + cardnumber_kafka)

      // Compare the card number from Kafka against every MemSQL row.
      result.collect().foreach { t =>
        val resm1 = t.getAs[String]("card_number")
        println("CARDNUMBER MEMSQL ############### " + resm1)
        val resm2 = t.getAs[Long]("count(*)")
        println("COUNT MEMSQL ############### " + resm2)

        if (resm1.equals(cardnumber_kafka)) {
          if (resm2 > 2) { // third or more occurrence within the window
            println("INSIDE IF CONDITION FOR COUNT OF 3 OR MORE " + now)
            val messageToKafka = "---- THIRD OR MORE OCCURRENCE ---- " + cardnumber_kafka
            val message = new ProducerRecord[String, String]("output1", 0, sessionID, messageToKafka)
            try {
              producer.send(message)
            } catch {
              case e: Exception =>
                e.printStackTrace()
                System.exit(1)
            }
          }
        }
      }
    }

    producer.close()
  }
}
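
For reference, a minimal driver that wires this class up could look like the sketch below. This is an assumption, not code from the original post: the topic name, group id, and 10-second batch interval are placeholders, and the broker address is reused from the snippet above.

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkKafkaConsumer11App {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("SparkKafkaConsumer11").getOrCreate()
    val ssc = new StreamingContext(spark.sparkContext, Seconds(10)) // batch interval is a placeholder

    val kafkaParam = Map[String, Object](
      "bootstrap.servers"  -> "174.24.154.244:9092", // broker from the snippet above
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "card-alert-consumer",  // placeholder
      "auto.offset.reset"  -> "latest"
    )

    // Constructing the class registers the DStream; start() begins consuming.
    new SparkKafkaConsumer11(ssc, spark.sparkContext, spark, Array("cards"), kafkaParam)
    ssc.start()
    ssc.awaitTermination()
  }
}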

We are not sure how to resolve this; any suggestion or help is highly appreciated.

Thanks in advance.

1 answer:

Answer 0 (score: 0):

We were able to resolve this issue by setting the following property in the Spark configuration.

Code:

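The answer's code block did not survive in this copy of the page. Based on the symptom above (one partial count per MemSQL leaf partition), the property was most likely the MemSQL Spark connector's partition-pushdown flag; the following is a sketch of that reconstruction, with the property name itself an assumption:

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

// Assumed reconstruction: by default the MemSQL connector pushes the query
// down to each leaf partition, so the GROUP BY runs once per partition and
// returns partial counts. Disabling pushdown sends the query through the
// aggregator, which computes a single global count per card number.
val conf = new SparkConf()
  .setAppName("SparkKafkaConsumer11")
  .set("spark.memsql.disablePartitionPushdown", "true") // property name is an assumption

val spark = SparkSession.builder().config(conf).getOrCreate()

If that flag is not available in the connector version in use, an alternative would be to alias the count in the pushed-down query (count(*) as cnt) and re-aggregate in Spark with result.groupBy("card_number").agg(sum("cnt")), which combines the per-partition partial counts into the global totals shown at the MemSQL prompt.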