Creating a DataFrame or RDD from a Java ResultSet using Scala

Date: 2018-06-13 17:11:48

Tags: scala apache-spark dataframe resultset

I don't want to create a DataFrame or RDD directly with the spark.read method. I want to build a DataFrame or RDD from a Java ResultSet (which has 5,000,00 records). I'd appreciate an efficient solution.

1 Answer:

Answer 0 (score: 0)

First, we can create Rows using RowFactory. Then, all the Rows can be converted into a DataFrame with the SQLContext.createDataFrame method. Hope this helps :).

import java.sql.ResultSet

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{Row, RowFactory, SQLContext}
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}

import scala.collection.mutable.ListBuffer

//The ResultSet is created from traditional Java JDBC.
//DbConnection is assumed to be an open java.sql.Connection,
//and "Sql" stands in for the actual query string.
val resultSet: ResultSet = DbConnection.createStatement().executeQuery("Sql")

//Looping over the ResultSet, turning each JDBC row into a Spark Row.
val rowList = new ListBuffer[Row]
while (resultSet.next()) {
   //adding two columns into a "Row" object and appending it to the list.
   rowList += RowFactory.create(resultSet.getObject(1), resultSet.getObject(2))
}

val sConf = new SparkConf()
sConf.setAppName("")
sConf.setMaster("local[*]")
val sContext: SparkContext = new SparkContext(sConf)
val sqlContext: SQLContext = new SQLContext(sContext)

//Parallelizes the collected Rows into an RDD (2 partitions) and creates a DataFrame from it.
val df = sqlContext.createDataFrame(sContext.parallelize(rowList, 2), getSchema())
df.show() //show the dataframe.

//Returns the schema that defines the DataFrame's columns.
def getSchema(): StructType = {
  val schema = StructType(
    StructField("COUNT", LongType, nullable = false) ::
      StructField("TABLE_NAME", StringType, nullable = false) :: Nil)
  schema
}
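
Since Spark 2.0, SQLContext and manually constructed SparkContexts are superseded by SparkSession. As a minimal sketch of the same conversion on Spark 2.x (assuming rowList is the ListBuffer[Row] filled from the ResultSet as above; the app name is arbitrary):

import scala.collection.JavaConverters._
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}

val spark = SparkSession.builder()
  .appName("ResultSetToDataFrame") //hypothetical app name
  .master("local[*]")
  .getOrCreate()

val schema = StructType(
  StructField("COUNT", LongType, nullable = false) ::
    StructField("TABLE_NAME", StringType, nullable = false) :: Nil)

//createDataFrame also accepts a java.util.List[Row] together with a schema,
//so the collected rows can be passed directly after conversion.
val df = spark.createDataFrame(rowList.asJava, schema)
df.show()

Note that in both variants every row is first materialized in driver memory before being distributed, which is fine for a few hundred thousand small rows but does not scale to arbitrarily large ResultSets.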