Split 1 column into 3 columns in spark scala

Date: 2016-08-31 17:47:31

Tags: scala apache-spark

I have a dataframe in Spark using Scala that has a column that I need to split.

scala> test.show
+-------------+
|columnToSplit|
+-------------+
|        a.b.c|
|        d.e.f|
+-------------+

I need this column split out to look like this:

+----+----+----+
|col1|col2|col3|
+----+----+----+
|   a|   b|   c|
|   d|   e|   f|
+----+----+----+

I'm using Spark 2.0.0

Thanks

7 Answers:

Answer 0 (score: 55)

Try:

df.withColumn("_tmp", split($"columnToSplit", "\\.")).select(
  $"_tmp".getItem(0).as("col1"),
  $"_tmp".getItem(1).as("col2"),
  $"_tmp".getItem(2).as("col3")
).drop("_tmp")

Answer 1 (score: 18)

To do this programmatically, you can create a sequence of expressions with (0 until 3).map(i => col("temp").getItem(i).as(s"col$i")) (assuming you need 3 columns as the result), then unpack it into select with the : _* syntax:

df.withColumn("temp", split(col("columnToSplit"), "\\.")).select(
    (0 until 3).map(i => col("temp").getItem(i).as(s"col$i")): _*
).show
+----+----+----+
|col0|col1|col2|
+----+----+----+
|   a|   b|   c|
|   d|   e|   f|
+----+----+----+
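The : _* used above is ordinary Scala varargs expansion, not a Spark feature. A minimal sketch outside Spark (concat is a hypothetical method used only for illustration):

```scala
// `: _*` expands a Seq into a varargs call.
def concat(parts: String*): String = parts.mkString("-")

val cols = (0 until 3).map(i => s"col$i")  // Vector("col0", "col1", "col2")
val joined = concat(cols: _*)              // same as concat("col0", "col1", "col2")
```

This is exactly what happens when the mapped sequence of Column expressions is passed to select.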

To keep all the columns:

df.withColumn("temp", split(col("columnToSplit"), "\\.")).select(
    col("*") +: (0 until 3).map(i => col("temp").getItem(i).as(s"col$i")): _*
).show
+-------------+---------+----+----+----+
|columnToSplit|     temp|col0|col1|col2|
+-------------+---------+----+----+----+
|        a.b.c|[a, b, c]|   a|   b|   c|
|        d.e.f|[d, e, f]|   d|   e|   f|
+-------------+---------+----+----+----+

If you are using pyspark, replace the Scala map with a list comprehension or generator expression:

from pyspark.sql.functions import col, split

df = spark.createDataFrame([['a.b.c'], ['d.e.f']], ['columnToSplit'])

(df.withColumn('temp', split('columnToSplit', '\\.'))
   .select(*(col('temp').getItem(i).alias(f'col{i}') for i in range(3)))
   .show())
+----+----+----+
|col0|col1|col2|
+----+----+----+
|   a|   b|   c|
|   d|   e|   f|
+----+----+----+

Answer 2 (score: 17)

A solution that avoids the select part. This is helpful when you only want to append new columns:

case class Message(others: String, text: String)

val r1 = Message("foo1", "a.b.c")
val r2 = Message("foo2", "d.e.f")

val records = Seq(r1, r2)
val df = spark.createDataFrame(records)

df.withColumn("col1", split(col("text"), "\\.").getItem(0))
  .withColumn("col2", split(col("text"), "\\.").getItem(1))
  .withColumn("col3", split(col("text"), "\\.").getItem(2))
  .show(false)

+------+-----+----+----+----+
|others|text |col1|col2|col3|
+------+-----+----+----+----+
|foo1  |a.b.c|a   |b   |c   |
|foo2  |d.e.f|d   |e   |f   |
+------+-----+----+----+----+

Update: I highly recommend using Psidom's implementation to avoid splitting three times.

Answer 3 (score: 5)

This appends the columns to the original DataFrame, doesn't use select, and only splits once by using a temporary column:

import spark.implicits._

df.withColumn("_tmp", split($"columnToSplit", "\\."))
  .withColumn("col1", $"_tmp".getItem(0))
  .withColumn("col2", $"_tmp".getItem(1))
  .withColumn("col3", $"_tmp".getItem(2))
  .drop("_tmp")

Answer 4 (score: 1)

This expands on Psidom's answer and shows how to do the split dynamically, without hardcoding the number of columns. This answer runs a query to compute the number of columns.

val df = Seq(
  "a.b.c",
  "d.e.f"
).toDF("my_str")
.withColumn("letters", split(col("my_str"), "\\."))

val numCols = df
  .withColumn("letters_size", size($"letters"))
  .agg(max($"letters_size"))
  .head()
  .getInt(0)

df
  .select(
    (0 until numCols).map(i => $"letters".getItem(i).as(s"col$i")): _*
  )
  .show()
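The sizing step just computes the maximum number of split parts across rows. The same logic in plain Scala (outside Spark, with hypothetical ragged input strings) looks like:

```scala
// Plain-Scala sketch of what max(size($"letters")) computes.
// The second string splits into 4 parts, so numCols is 4.
val rows = Seq("a.b.c", "d.e.f.g")
val numCols = rows.map(_.split("\\.").length).max
```

Using the maximum guards against rows with fewer parts; for those rows, getItem returns null for the missing positions.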

Answer 5 (score: 1)

We can write a for comprehension with yield in Scala:

If you have more columns, just add them to the desired column list and play with it. :)

val aDF = Seq("Deepak.Singh.Delhi").toDF("name")
val desiredColumn = Seq("name","Lname","City")
val colsize = desiredColumn.size

val columList = for (i <- 0 until colsize) yield split(col("name"), "\\.").getItem(i).alias(desiredColumn(i))

aDF.select(columList: _*).show(false)

Output:

+------+-----+-----+
|name  |Lname|City |
+------+-----+-----+
|Deepak|Singh|Delhi|
+------+-----+-----+

If you don't need the name column, drop it and just use withColumn.
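Note that split takes a regular expression, so the dot must be escaped as "\\." — with a bare "." every character matches the delimiter, all tokens come out empty, and Java drops the trailing empty strings. A plain-Scala check:

```scala
// split takes a regex: "\\." matches a literal dot, "." matches any character.
val escaped   = "Deepak.Singh.Delhi".split("\\.")  // Array("Deepak", "Singh", "Delhi")
val unescaped = "Deepak.Singh.Delhi".split(".")    // empty array: every token is empty
```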

Answer 6 (score: 0)

Example: without using a select statement.

Suppose we have a dataframe with a set of columns and we want to split a column named name.
import spark.implicits._

val columns = Seq("name","age","address")

val data = Seq(("Amit.Mehta", 25, "1 Main st, Newark, NJ, 92537"),
             ("Rituraj.Mehta", 28,"3456 Walnut st, Newark, NJ, 94732"))

var dfFromData = spark.createDataFrame(data).toDF(columns:_*)
dfFromData.printSchema()

val newDF = dfFromData.map(f => {
  val nameSplit = f.getAs[String](0).split("\\.").map(_.trim)
  (nameSplit(0), nameSplit(1), f.getAs[Int](1), f.getAs[String](2))
})

val finalDF = newDF.toDF("First Name","Last Name", "Age","Address")

finalDF.printSchema()

finalDF.show(false)

Output: (screenshot in the original answer)
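The row-mapping logic above can be checked on a plain tuple without a Spark session (a sketch, using the first input row from the example):

```scala
// Mimic the map over Rows on an ordinary tuple.
val row = ("Amit.Mehta", 25, "1 Main st, Newark, NJ, 92537")
val nameSplit = row._1.split("\\.").map(_.trim)
val result = (nameSplit(0), nameSplit(1), row._2, row._3)
// result: ("Amit", "Mehta", 25, "1 Main st, Newark, NJ, 92537")
```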