Spark DataFrame writeStream foreach does not write all rows

Date: 2019-05-09 22:01:24

Tags: apache-spark apache-kafka spark-streaming spark-structured-streaming

My data source is Kafka, and I read the data from Kafka like this:

var df = spark
    .readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092,localhost:9093,localhost:9094")
    .option("subscribe", "raw_weather")
    .load()

df = df.selectExpr("CAST(value as STRING)")   // keep only the Kafka value, cast from bytes to string
        .as[String]                           // needs `import spark.implicits._` for the String encoder
        .select("value")

The value received looks like this: (725030:14732,2008,12,31,11,0.6,-6.7,1001.7,80,6.2,8,0.0,0.0). The number of rows sent to Kafka is 8784 (24 * 366).
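For reference, a minimal sketch of how one such value line splits into its 13 comma-separated fields; the field meanings are my assumption, inferred from the column list in the INSERT statement further down:

// Hypothetical breakdown of one raw value line (field meanings assumed from the
// wsid, year, month, ..., six_hour_precip columns used in the sink below).
val sample = "725030:14732,2008,12,31,11,0.6,-6.7,1001.7,80,6.2,8,0.0,0.0"
val fields = sample.split(",")
assert(fields.length == 13)   // 13 fields map onto the 13 columns of RAW_WEATHER_DATA
println(fields(0))            // 725030:14732 -> wsid
println(fields(1))            // 2008         -> year
println(fields(12))           // 0.0          -> six_hour_precip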

I am trying to stream this data into a DB2 database using a class that extends org.apache.spark.sql.ForeachWriter[org.apache.spark.sql.Row]. This is how I try to write the data:

def writeToDb2(spark: SparkSession, df: DataFrame): Unit = {
    val writer = new JDBCSink(url, user, password)

    val query = df.writeStream
        .foreach(writer)                        // hand every row to the custom ForeachWriter
        .outputMode("append")
        .trigger(Trigger.ProcessingTime(2000))  // fire a micro-batch every 2 seconds
        .start()

    query.awaitTermination()
}

This is what my JDBCSink looks like:

// Custom sink: opens one DB2 connection per partition and issues one INSERT per row.
class JDBCSink(url: String, user: String, pwd: String) extends org.apache.spark.sql.ForeachWriter[org.apache.spark.sql.Row]{
    val driver = "com.ibm.db2.jcc.DB2Driver"
    var connection:java.sql.Connection = _
    var statement:java.sql.Statement = _

    val schema = "SPARK"
    val rawTableName = "RAW_WEATHER_DATA"
    val dailyPrecipitationTable = "DAILY_PRECIPITATION_TABLE"

    // called once per partition (and per epoch) before any rows are handed to process()
    def open(partitionId: Long, version: Long): Boolean = {
        Class.forName(driver)
        connection = java.sql.DriverManager.getConnection(url, user, pwd)
        statement = connection.createStatement
        true
    }

    // called once per row: split the single "value" column on commas and build the INSERT statement
    def process(valz: org.apache.spark.sql.Row): Unit = {
        val value = valz(0).toString.split(",")
        val stmt = s"INSERT INTO $schema.$rawTableName(wsid, year, month, day, hour, temperature, dewpoint, pressure, wind_direction, wind_speed, sky_condition, one_hour_precip, six_hour_precip) " +
            "VALUES (" +
            "'" + value(0) + "'," +
            value(1) + "," +
            value(2) + "," +
            value(3) + "," +
            value(4) + "," +
            value(5) + "," +
            value(6) + "," +
            value(7) + "," +
            value(8) + "," +
            value(9) + "," +
            value(10) + "," +
            value(11) + "," +
            value(12) + ")"
        println(value(1) + "," + value(2) + "," + value(3) + "," + value(4) + "," + value(11))

        statement.executeUpdate(stmt)
    }

    // called when the partition finishes or fails; errorOrNull (any exception thrown by process) is not inspected here
    def close(errorOrNull: Throwable): Unit = {
        connection.close()
    }
}

Here, when I send the data to the stream, Spark does not read all the rows. This becomes clear once I look at what the program actually tries to write. When I run a COUNT(*) on the table, not all 8784 rows have been written. On some runs of the program the number of rows written hovers around 7000, other times around 7900, and so on; it never writes all of the rows.
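To see whether the rows are already missing on the read side or only in the sink, one check is to log how many rows each micro-batch actually pulls from Kafka and compare that with the COUNT(*) in DB2. A minimal sketch, assuming the listener is registered on the same SparkSession before start() (this diagnostic is my own addition, not part of the program above):

// Diagnostic only: print the number of input rows per micro-batch so the totals
// can be compared against the rows that actually land in DB2.
import org.apache.spark.sql.streaming.StreamingQueryListener
import org.apache.spark.sql.streaming.StreamingQueryListener._

spark.streams.addListener(new StreamingQueryListener {
    def onQueryStarted(event: QueryStartedEvent): Unit = {}
    def onQueryProgress(event: QueryProgressEvent): Unit =
        println(s"batch=${event.progress.batchId} numInputRows=${event.progress.numInputRows}")
    def onQueryTerminated(event: QueryTerminatedEvent): Unit = {}
})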

What could be the reason behind this? I followed the Structured Streaming programming guide. I have also tried running with various other triggers (examples sketched below), but none of them helped.
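By "various other triggers" I mean variants along these lines (illustrative only; the exact values tried are not the point):

// Illustrative trigger variants, swapped into the .trigger(...) call above.
import org.apache.spark.sql.streaming.Trigger

val everyTenSeconds = Trigger.ProcessingTime("10 seconds")  // longer micro-batch interval
val singleBatch     = Trigger.Once()                        // drain everything available once, then stop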

0 answers:

No answers yet