Dynamically creating a DataFrame in Spark Scala

Asked: 2019-08-26 16:52:04

Tags: scala apache-spark hadoop hdfs dynamic-programming

I have a few columns of data coming out of DataFrame 1 inside a loop (from different rows). I want to create a DataFrame 2 containing all of these rows/columns.

Below is the sample data; I tried using a Seq:

var DF1 = Seq(
  ("11111111", "0101","6573","X1234",12763),
  ("44444444", "0148","8382","Y5678",-2883),
  ("55555555", "0154","5240","Z9011", 8003))

I want to append 2 dynamic rows to the Seq above, and then create a DataFrame from the final Seq:

  ("88888888", "1333","7020","DEF34",500)
  ("99999999", "1333","7020","GHI56",500)

The final Seq or DataFrame should look like this:

   var DF3 = Seq(
      ("11111111", "0101","6573","X1234",12763),
      ("44444444", "0148","8382","Y5678",-2883),
      ("55555555", "0154","5240","Z9011", 8003),
      ("88888888", "1333","7020","DEF34",500),
      ("99999999", "1333","7020","GHI56",500))

In the code below I created a case class so I could use it with the Seq where possible. The problem is that appending a new row to the Seq returns a *new* Seq with the row added. How do I get hold of the updated Seq containing the new rows? If a Seq is not the right choice, would an ArrayBuffer be a good idea?

  case class CreateDFTestCaseClass(ACCOUNT_NO: String, LONG_IND: String, SHORT_IND: String,SECURITY_ID: String, QUANTITY: Integer)
  val sparkSession = SparkSession
    .builder()
    .appName("AllocationOneViewTest")
    .master("local")
    .getOrCreate()
  val sc = sparkSession.sparkContext
  import sparkSession.sqlContext.implicits._
  def main(args: Array[String]): Unit = {
    var acctRulesPosDF = Seq(
      ("11111111", "0101","6573","X1234",12763),
      ("44444444", "0148","8382","Y5678",-2883),
      ("55555555", "0154","5240","Z9011", 8003))
    acctRulesPosDF :+ ("88888888", "1333","7020","DEF34",500)
    acctRulesPosDF :+ ("99999999", "1333","7020","GHI56",500)
    var DF3 = acctRulesPosDF.toDF
    DF3.show()
  }

2 Answers:

Answer 0 (score: 0)

This is not the most elegant approach, but to keep the code as similar to the original as possible, you just need to assign the result back to the variable:

 var acctRulesPosDF = Seq(
      ("11111111", "0101","6573","X1234",12763),
      ("44444444", "0148","8382","Y5678",-2883),
      ("55555555", "0154","5240","Z9011", 8003))
    acctRulesPosDF = acctRulesPosDF :+ ("88888888", "1333","7020","DEF34",500)
    acctRulesPosDF = acctRulesPosDF :+ ("99999999", "1333","7020","GHI56",500)

A quick example in the spark-shell:

scala>  var acctRulesPosDF = Seq(
     |       ("11111111", "0101","6573","X1234",12763),
     |       ("44444444", "0148","8382","Y5678",-2883),
     |       ("55555555", "0154","5240","Z9011", 8003))
acctRulesPosDF: Seq[(String, String, String, String, Int)] = List((11111111,0101,6573,X1234,12763), (44444444,0148,8382,Y5678,-2883), (55555555,0154,5240,Z9011,8003))

scala>     acctRulesPosDF = acctRulesPosDF:+ ("88888888", "1333","7020","DEF34",500)
acctRulesPosDF: Seq[(String, String, String, String, Int)] = List((11111111,0101,6573,X1234,12763), (44444444,0148,8382,Y5678,-2883), (55555555,0154,5240,Z9011,8003), (88888888,1333,7020,DEF34,500))

scala>     acctRulesPosDF = acctRulesPosDF:+ ("99999999", "1333","7020","GHI56",500)
acctRulesPosDF: Seq[(String, String, String, String, Int)] = List((11111111,0101,6573,X1234,12763), (44444444,0148,8382,Y5678,-2883), (55555555,0154,5240,Z9011,8003), (88888888,1333,7020,DEF34,500), (99999999,1333,7020,GHI56,500))

scala> var DF3 = acctRulesPosDF.toDF
DF3: org.apache.spark.sql.DataFrame = [_1: string, _2: string ... 3 more fields]

scala>     DF3.show()
+--------+----+----+-----+-----+
|      _1|  _2|  _3|   _4|   _5|
+--------+----+----+-----+-----+
|11111111|0101|6573|X1234|12763|
|44444444|0148|8382|Y5678|-2883|
|55555555|0154|5240|Z9011| 8003|
|88888888|1333|7020|DEF34|  500|
|99999999|1333|7020|GHI56|  500|
+--------+----+----+-----+-----+
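Note that `toDF` with no arguments produces the autogenerated column names `_1` … `_5` seen above. The case class from the question can be reused to get named columns: map each tuple into the case class, then call `.toDF()`. A minimal sketch of the mapping step, shown without the SparkSession (the commented `.toDF()` call assumes `import spark.implicits._` is in scope):

```scala
// Reuse the case class from the question to give the columns real names.
case class CreateDFTestCaseClass(ACCOUNT_NO: String, LONG_IND: String,
                                 SHORT_IND: String, SECURITY_ID: String,
                                 QUANTITY: Integer)

val rows = Seq(
  ("11111111", "0101", "6573", "X1234", 12763),
  ("88888888", "1333", "7020", "DEF34", 500))

// Convert each tuple into a typed record.
val typedRows = rows.map { case (acct, long, short, sec, qty) =>
  CreateDFTestCaseClass(acct, long, short, sec, qty)
}

// With a SparkSession and `import spark.implicits._` in scope:
// val df = typedRows.toDF()   // columns: ACCOUNT_NO, LONG_IND, ...
```

Alternatively, `acctRulesPosDF.toDF("ACCOUNT_NO", "LONG_IND", "SHORT_IND", "SECURITY_ID", "QUANTITY")` names the columns directly without a case class.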

Answer 1 (score: 0)

The reason you get the same old Seq even after appending new rows is that, unless you explicitly import the mutable variant, the Seq imported by default is of type scala.collection.immutable.Seq, which cannot be modified in place. Use scala.collection.mutable.Seq instead. So either explicitly import the mutable Seq in your Scala code, or follow @SCouto's suggestion in the other answer.
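As for the ArrayBuffer idea raised in the question: accumulating rows in a loop is a reasonable fit for ArrayBuffer, since `+=` appends in place and no reassignment is needed. A minimal sketch in plain Scala (the commented `.toDF()` call assumes a SparkSession with `import spark.implicits._` in scope):

```scala
import scala.collection.mutable.ArrayBuffer

// Start from the initial rows.
val buf = ArrayBuffer(
  ("11111111", "0101", "6573", "X1234", 12763),
  ("44444444", "0148", "8382", "Y5678", -2883),
  ("55555555", "0154", "5240", "Z9011", 8003))

// += mutates the buffer in place; note the extra parentheses so the
// tuple is passed as a single argument.
buf += (("88888888", "1333", "7020", "DEF34", 500))
buf += (("99999999", "1333", "7020", "GHI56", 500))

val finalSeq = buf.toSeq
// With `import spark.implicits._` in scope:
// val DF3 = finalSeq.toDF()
```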