How to select multiple columns from a dataframe when some are arrays and some are string type

Asked: 2019-07-10 09:28:39

Tags: scala apache-spark

I have a dataframe with these columns. Column1 and COLUMN2 are arrays and the rest of the columns are strings. I need to get the values of both kinds using a select.
    +-----------+-------------+--------------------+-----------+--------+
    |Column1    |    COLUMN2  |       NAME         |STATUS     |Sequence|
    +-----------+-------------+--------------------+-----------+--------+
    |      [ABC]|        [ABC]|         BILAL AHMAD|       ID-N|       1|
    |      [ABC]|        [ABC]|        JUNAID Ali  |       ID-N|       1|
    |      [ABC]|        [ABC]|         BILAL ZAFAR|       ID-N|       1|
    |      [ABC]|        [ABC]|              KHALID|       ID-N|       1|
    |      [ABC]|        [ABC]|              KASHIF|       ID-N|       1|
    |      [ABC]|        [ABC]|              SALMAN|       ID-N|       2|
    +-----------+-------------+--------------------+-----------+--------+

I tried using the code mentioned below. df.select($"*")

val seqCols = Seq("NAME","STATUS","sequence")

val allColumnsArr = "LEAD_CO_MNE" +: seqCols

df.select(allColumnsArr.map(c => col(c).getItem(0)): _*)

Column1 and COLUMN2 are arrays, so by using getItem(0) I get the first element of each array. But this does not work for the string columns. The string columns are also dynamic: sometimes the string column names are, for example, "DATE", "AMOUNT", "MODE".

+-----------+-------------+--------------------+-----------+--------+
|Column1    |    COLUMN2  |       NAME         |STATUS     |Sequence|
+-----------+-------------+--------------------+-----------+--------+
|      ABC  |        ABC  |         BILAL AHMAD|       ID-N|       1|
|      ABC  |        ABC  |        JUNAID Ali  |       ID-N|       1|
|      ABC  |        ABC  |         BILAL ZAFAR|       ID-N|       1|
|      ABC  |        ABC  |              KHALID|       ID-N|       1|
|      ABC  |        ABC  |              KASHIF|       ID-N|       1|
|      ABC  |        ABC  |              SALMAN|       ID-N|       2|
+-----------+-------------+--------------------+-----------+--------+

"[]"这些括号从column1和Column2中删除,现在我将column1和column2作为字符串

2 Answers:

Answer 0 (score: 0)

You can explode the array columns and then select as usual.

import org.apache.spark.sql.functions.{col, explode}
import spark.implicits._

val df = spark.sparkContext.parallelize(Seq(
  (Array("ABC"), Array("ABC"), "BILAL AHMAD", "ID-N", "1"),
  (Array("ABC"), Array("ABC"), "JUNAID Ali", "ID-N", "1"),
  (Array("ABC"), Array("ABC"), "BILAL ZAFAR", "ID-N", "1")
)).toDF("Column1", "COLUMN2", "NAME", "STATUS", "Sequence")

Explode the array-type columns. Alternatively, you could just select the first element of each array here (a sketch of that variant follows the output below).

// Fold over the schema, replacing every array column with its exploded values.
val dfNew = df.schema.foldLeft(df) { (acc, field) =>
  field.dataType.typeName match {
    case "array" => acc.withColumn(field.name, explode(col(field.name)))
    case _ => acc
  }
}

dfNew.select("*").show(false)

Output:

+-------+-------+-----------+------+--------+
|Column1|COLUMN2|NAME       |STATUS|Sequence|
+-------+-------+-----------+------+--------+
|ABC    |ABC    |BILAL AHMAD|ID-N  |1       |
|ABC    |ABC    |JUNAID Ali |ID-N  |1       |
|ABC    |ABC    |BILAL ZAFAR|ID-N  |1       |
+-------+-------+-----------+------+--------+
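
As mentioned above, instead of exploding you can keep exactly one row per input row and just take the first element of each array column. A minimal sketch of that variant, reusing the same df:

// Variant: replace each array column with its first element instead of exploding.
// Unlike explode, this keeps exactly one output row per input row.
val dfFirst = df.schema.foldLeft(df) { (acc, field) =>
  field.dataType.typeName match {
    case "array" => acc.withColumn(field.name, col(field.name).getItem(0))
    case _ => acc
  }
}
dfFirst.show(false)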

Answer 1 (score: 0)

You can check the column type and, if it is an array, take the first item:

import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.ArrayType
import spark.implicits._

val df = Seq(
  (Array("ABC"), Array("ABC"), "BILAL AHMAD", "ID-N", 1),
  (Array("ABC"), Array("ABC"), "JUNAID Ali", "ID-N", 1)
).toDF("Column1", "COLUMN2", "NAME", "STATUS", "Sequence")

// Take the first element of array columns, select everything else as-is.
val columnsToSelect = df.schema.map(c =>
  if (c.dataType.isInstanceOf[ArrayType]) col(c.name).getItem(0).alias(c.name) else col(c.name))
df.select(columnsToSelect: _*)

Output:

+-------+-------+-----------+------+--------+
|Column1|COLUMN2|NAME       |STATUS|Sequence|
+-------+-------+-----------+------+--------+
|ABC    |ABC    |BILAL AHMAD|ID-N  |1       |
|ABC    |ABC    |JUNAID Ali |ID-N  |1       |
+-------+-------+-----------+------+--------+
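
Since the question mentions that the string columns can vary (e.g. "DATE", "AMOUNT", "MODE"), the same schema-driven selection works unchanged on a dataframe with different string columns. A minimal sketch with hypothetical, made-up values:

// Hypothetical dataframe using the alternative column names from the question.
val df2 = Seq(
  (Array("ABC"), Array("ABC"), "2019-07-10", "100", "MODE-X")
).toDF("Column1", "COLUMN2", "DATE", "AMOUNT", "MODE")

val cols2 = df2.schema.map(c =>
  if (c.dataType.isInstanceOf[ArrayType]) col(c.name).getItem(0).alias(c.name) else col(c.name))
df2.select(cols2: _*).show(false)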