Dataset values change after inserting into a MySQL database

Time: 2017-10-13 05:02:29

Tags: scala apache-spark jdbc streaming spark-streaming

I have a small scenario in which I read a text file, compute averages grouped by date, and store the summary in a MySQL database.

Below is the code:

val repo_sum = joined_data.map(SensorReport.generateReport)
repo_sum.show()                                                           // STEP 1
repo_sum.write.mode(SaveMode.Overwrite).jdbc(url, "sensor_report", prop)
repo_sum.show()                                                           // STEP 2

After the averages have been computed in the repo_sum dataframe, the result at STEP 1 is:

+----------+------------------+-----+-----+
|      date|               flo|   hz|count|
+----------+------------------+-----+-----+
|2017-10-05|52.887049194476745|10.27|  5.0|
|2017-10-04|  55.4188048943416|10.27|  5.0|
|2017-10-03|  54.1529270444092|10.27| 10.0|
+----------+------------------+-----+-----+

Then the save command is executed, and at STEP 2 the values in the dataset are:

+----------+-----------------+------------------+-----+
|      date|              flo|                hz|count|
+----------+-----------------+------------------+-----+
|2017-10-05|52.88704919447673|31.578524597238367| 10.0|
|2017-10-04| 55.4188048943416| 32.84440244717079| 10.0|
+----------+-----------------+------------------+-----+

Below is the complete code:

import org.apache.spark.SparkConf
import org.apache.spark.sql.{Row, SQLContext, SaveMode}
import org.apache.spark.streaming.{Seconds, StreamingContext}

class StreamRead extends Serializable {
  org.apache.spark.sql.catalyst.encoders.OuterScopes.addOuterScope(this);
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Application").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(2))
    val sqlContext = new SQLContext(ssc.sparkContext)
    import sqlContext.implicits._
    val sensorDStream = ssc.textFileStream("file:///C:/Users/M1026352/Desktop/Spark/StreamData").map(Sensor.parseSensor)
    val url = "jdbc:mysql://localhost:3306/streamdata"
    val prop = new java.util.Properties
    prop.setProperty("user", "root")
    prop.setProperty("password", "root")
    val tweets = sensorDStream.foreachRDD {
      rdd =>
        if (rdd.count() != 0) {
          val databaseVal = sqlContext.read.jdbc("jdbc:mysql://localhost:3306/streamdata", "sensor_report", prop)
          val rdd_group = rdd.groupBy { x => x.date }
          val repo_data = rdd_group.map { x =>
            val sum_flo = x._2.map { x => x.flo }.reduce(_ + _)
            val sum_hz = x._2.map { x => x.hz }.reduce(_ + _)
            val sum_flo_count = x._2.size
            print(sum_flo_count)
            SensorReport(x._1, sum_flo, sum_hz, sum_flo_count)
          }
          val df = repo_data.toDF()
          val joined_data = df.join(databaseVal, Seq("date"), "fullouter")
          joined_data.show()
          val repo_sum = joined_data.map(SensorReport.generateReport)
          repo_sum.show()
          repo_sum.write.mode(SaveMode.Overwrite).jdbc(url, "sensor_report", prop)
          repo_sum.show()
        }
    }

    ssc.start()
    WorkerAndTaskExample.main(args)
    ssc.awaitTermination()
  }
  case class Sensor(resid: String, date: String, time: String, hz: Double, disp: Double, flo: Double, sedPPM: Double, psi: Double, chlPPM: Double)

  object Sensor extends Serializable {
    def parseSensor(str: String): Sensor = {
      val p = str.split(",")
      Sensor(p(0), p(1), p(2), p(3).toDouble, p(4).toDouble, p(5).toDouble, p(6).toDouble, p(7).toDouble, p(8).toDouble)
    }
  }
  case class SensorReport(date: String, flo: Double, hz: Double, count: Double)
  object SensorReport extends Serializable {
    def generateReport(row: Row): SensorReport = {
      print(row)
      if (row.get(4) == null) {
        SensorReport(row.getString(0), row.getDouble(1) / row.getDouble(3), row.getDouble(2) / row.getDouble(3), row.getDouble(3))
      } else if (row.get(2) == null) {
        SensorReport(row.getString(0), row.getDouble(4), row.getDouble(5), row.getDouble(6))
      } else {
        val count = row.getDouble(3) + row.getDouble(6)
        val flow_avg_update = (row.getDouble(6) * row.getDouble(4) + row.getDouble(1)) / count
        val flow_flo_update = (row.getDouble(6) * row.getDouble(5) + row.getDouble(1)) / count
        print(count + " : " + flow_avg_update + " : " + flow_flo_update)
        SensorReport(row.getString(0), flow_avg_update, flow_flo_update, count)
      }
    }
  }
}

As far as I understand, when the save command is executed the whole process runs again in Spark. Please tell me whether my understanding is correct.

1 answer:

Answer 0 (score: 1)

In Spark all transformations are lazy; nothing happens until an action is invoked. At the same time, this means that if multiple actions are invoked on the same RDD or dataframe, all of the computations will be executed multiple times. That includes loading the data as well as all of the transformations.
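
As an illustration, a minimal hypothetical sketch (the runTwice helper and the filter condition are not part of the question; only the JDBC settings are reused from it). Each action below triggers a full run of the lineage, including the JDBC read:

    import java.util.Properties
    import org.apache.spark.sql.SQLContext

    // Two actions on the same lazily built dataframe: each one re-runs the lineage.
    def runTwice(sqlContext: SQLContext, prop: Properties): Unit = {
      val db = sqlContext.read.jdbc("jdbc:mysql://localhost:3306/streamdata", "sensor_report", prop)
      val filtered = db.filter("count > 0") // lazy: nothing is executed yet

      filtered.show()  // action 1: reads the table and applies the filter
      filtered.count() // action 2: reads the table and applies the filter again
    }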

To avoid this, use cache() or persist(). They do the same thing, except that persist() lets you specify a different storage level (by default only memory is used). cache() keeps the RDD/dataframe in memory after the first action has run on it, thereby avoiding running the same transformations more than once.
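
For example, applied to the repo_sum dataframe from the question (StorageLevel.MEMORY_AND_DISK is only one possible explicit level):

    import org.apache.spark.storage.StorageLevel

    // cache() uses the default storage level; persist() lets you choose one explicitly.
    repo_sum.cache()
    // or, to allow partitions that do not fit in memory to spill to disk:
    // repo_sum.persist(StorageLevel.MEMORY_AND_DISK)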

In this case, since the two actions performed on the dataframe are what cause the unexpected behavior, caching the dataframe solves the problem:

val repo_sum = joined_data.map(SensorReport.generateReport).cache()
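
As a sketch of how the cached dataframe fits back into the foreachRDD block from the question; the unpersist() call at the end is an optional extra, not part of the original answer, that releases the cached data once the write and both show() calls have run:

    val repo_sum = joined_data.map(SensorReport.generateReport).cache()
    repo_sum.show()                                                           // STEP 1: runs the lineage and populates the cache
    repo_sum.write.mode(SaveMode.Overwrite).jdbc(url, "sensor_report", prop)  // reuses cached data instead of recomputing
    repo_sum.show()                                                           // STEP 2: same values as STEP 1
    repo_sum.unpersist()                                                      // optional: free the cache before the next micro-batch

With the cache in place the join is not re-evaluated against the freshly overwritten table, so both show() calls report the same values.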