Spark Streaming: saving data to MySQL with foreachRDD() in Scala
Please, can someone give me a working example of saving Spark Streaming data to a MySQL database using foreachRDD() in Scala? I have the code below, but it doesn't work. I just need a simple example, not syntax or theory.
Thanks!
package examples
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark._
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import StreamingContext._
import org.apache.hadoop.io.Text
import org.apache.hadoop.io.LongWritable
import org.apache.hadoop.mapred.SequenceFileOutputFormat
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.hive.HiveContext
import java.util.Properties
import org.apache.spark.sql.SaveMode
object StreamingToMysql {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setAppName("NetworkWordCount").setMaster("local[*]")
    val sc = new SparkContext(sparkConf)
    val sqlContext = new SQLContext(sc)
    val hiveCtx = new HiveContext(sc)
    import hiveCtx.implicits._
    val ssc = new StreamingContext(sc, Seconds(1))
    val lines = ssc.socketTextStream("localhost", 9999)
    ssc.checkpoint("hdfs://localhost:54310/user/hduser/Streaming/logs")
    val rdd = sc.parallelize(List(1))
    val df = rdd.toDF()
    val split = lines.map(line => line.split(","))
    val input = split.map(x => x(0))
    input.foreachRDD { rdd =>
      if (rdd.take(1).size == 1) {
        rdd.foreachPartition { iterator =>
          iterator.foreach {
            val connectionProperties = new Properties()
            connectionProperties.put("user", "root")
            connectionProperties.put("password", "admin123")
            iterator.write.mode("append")
              .jdbc("jdbc:mysql://192.168.100.8:3306/hadoopguide", "topics", connectionProperties)
          }
        }
      }
    }
    val connectionProperties = new Properties()
    connectionProperties.put("user", "root")
    connectionProperties.put("password", "admin123")
    df.write.mode("append")
      .jdbc("jdbc:mysql://192.168.100.8:3306/hadoopguide", "topics", connectionProperties)
    println("Done")
    ssc.start()
    ssc.awaitTermination()
  }
}
Answer 0 (score: 0)
To write data from Spark Streaming to an external system, you can use either the high-level DataFrame API or low-level RDDs. In the code above, the two approaches are mixed together, and that will not work.
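If you want to stay at the RDD level, as your foreachPartition block attempts, the usual pattern is to open one JDBC connection per partition on the executor and insert the records manually. Below is a minimal sketch of that low-level approach, reusing the URL and credentials from your code; the table column name `topic` is an assumption, so adjust it to your actual table definition:

import java.sql.DriverManager

// Low-level alternative: one JDBC connection per partition (sketch, not the DataFrame approach below)
input.foreachRDD { rdd =>
  rdd.foreachPartition { records =>
    // Open the connection inside the partition (on the executor), never on the driver
    val conn = DriverManager.getConnection(
      "jdbc:mysql://192.168.100.8:3306/hadoopguide", "root", "admin123")
    try {
      // Column name "topic" is assumed; change it to match your table
      val stmt = conn.prepareStatement("INSERT INTO topics (topic) VALUES (?)")
      records.foreach { value =>
        stmt.setString(1, value)
        stmt.executeUpdate()
      }
    } finally {
      conn.close()
    }
  }
}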
Assuming you know the structure of the data arriving in Spark Streaming, you can create a DataFrame from each RDD and use the DataFrame API to save it:
First, create a schema (case class) for the data:
case class MyStructure(field: Type,....)
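For your input, where each line is split on "," and only the first field is kept, the schema could be a single string field; the class and field names here are just placeholders:

// Hypothetical schema: one string column per record
case class Topic(value: String)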
Then, apply the schema to the incoming stream:
val structuredData = dstream.map(record => MyStructure(record.field1, ... record.fieldn))
Now use foreachRDD to convert each RDD in the DStream into a DataFrame and save it with the DataFrame API:
// JDBC writer configuration
val connectionProperties = new Properties()
connectionProperties.put("user", "root")
connectionProperties.put("password", "*****")

structuredData.foreachRDD { rdd =>
  // toDF() requires the SQLContext/HiveContext implicits in scope (import sqlContext.implicits._)
  val df = rdd.toDF() // create a DataFrame from the schema RDD
  df.write.mode("append")
    .jdbc("jdbc:mysql://192.168.100.8:3306/hadoopguide", "topics", connectionProperties)
}
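Putting the pieces together, a complete self-contained driver might look like the sketch below. The host, port, database, table, and credentials are copied from your question; the `Topic` case class and its column name are assumptions, and the case class has to be defined outside the method so that toDF() can derive its schema. It also assumes the MySQL JDBC driver (mysql-connector-java) is on the classpath, for example via --jars:

package examples

import java.util.Properties

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.streaming.{Seconds, StreamingContext}

// Case class defined at top level so toDF() can derive the schema
case class Topic(value: String)

object StreamingToMysql {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf().setAppName("StreamingToMysql").setMaster("local[*]")
    val sc = new SparkContext(sparkConf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val ssc = new StreamingContext(sc, Seconds(1))
    val lines = ssc.socketTextStream("localhost", 9999)

    // Keep only the first comma-separated field of each line, as in the question
    val structuredData = lines.map(line => Topic(line.split(",")(0)))

    // JDBC writer configuration (credentials copied from the question)
    val connectionProperties = new Properties()
    connectionProperties.put("user", "root")
    connectionProperties.put("password", "admin123")

    structuredData.foreachRDD { rdd =>
      if (!rdd.isEmpty()) {
        val df = rdd.toDF()
        df.write.mode("append")
          .jdbc("jdbc:mysql://192.168.100.8:3306/hadoopguide", "topics", connectionProperties)
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }
}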