My DataFrame is as follows:
val employees = sc.parallelize(Array[(String, Int, BigInt)](
("Rafferty", 31, 222222222), ("Jones", 33, 111111111), ("Heisenberg", 33, 222222222), ("Robinson", 34, 111111111), ("Smith", 34, 333333333), ("Williams", 15, 222222222)
)).toDF("LastName", "DepartmentID", "Code")
employees.show()
+----------+------------+---------+
| LastName|DepartmentID| Code|
+----------+------------+---------+
| Rafferty| 31|222222222|
| Jones| 33|111111111|
|Heisenberg| 33|222222222|
| Robinson| 34|111111111|
| Smith| 34|333333333|
| Williams| 15|222222222|
+----------+------------+---------+
I want to create another column, personal_id, by concatenating DepartmentID and Code. For example, Rafferty => 31222222222.
So I wrote the following code:
val anotherdf = employees.withColumn("personal_id", $"DepartmentID".cast("String") + $"Code".cast("String"))
+----------+------------+---------+------------+
| LastName|DepartmentID| Code| personal_id|
+----------+------------+---------+------------+
| Rafferty| 31|222222222|2.22222253E8|
| Jones| 33|111111111|1.11111144E8|
|Heisenberg| 33|222222222|2.22222255E8|
| Robinson| 34|111111111|1.11111145E8|
| Smith| 34|333333333|3.33333367E8|
| Williams| 15|222222222|2.22222237E8|
+----------+------------+---------+------------+
But personal_id came out as a double:
anotherdf.printSchema
root
|-- LastName: string (nullable = true)
|-- DepartmentID: integer (nullable = false)
|-- Code: decimal(38,0) (nullable = true)
|-- personal_id: double (nullable = true)
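
The values in scientific notation hint at what happened: with `+`, Spark SQL performs arithmetic addition, so both string operands are implicitly cast back to doubles and summed (for Rafferty, 31 + 222222222 = 222222253, printed as 2.22222253E8). A minimal sketch isolating this behaviour, assuming the same spark-shell session and employees DataFrame as above:

// `+` between columns resolves to numeric addition, even after .cast("String"):
// the analyzer re-casts both sides to double before adding them.
val added = employees.select(
  ($"DepartmentID".cast("String") + $"Code".cast("String")).as("personal_id")
)
added.printSchema()  // personal_id: double (nullable = true)
added.show(1)        // 2.22222253E8 for Rafferty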
Answer (score: 2)
I should use concat instead:
import org.apache.spark.sql.functions.concat
val anotherdf2 = employees.withColumn("personal_id", concat($"DepartmentID".cast("String"), $"Code".cast("String")))
+----------+------------+---------+-----------+
| LastName|DepartmentID| Code|personal_id|
+----------+------------+---------+-----------+
| Rafferty| 31|222222222|31222222222|
| Jones| 33|111111111|33111111111|
|Heisenberg| 33|222222222|33222222222|
| Robinson| 34|111111111|34111111111|
| Smith| 34|333333333|34333333333|
| Williams| 15|222222222|15222222222|
+----------+------------+---------+-----------+
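
As a variation on the accepted approach (not from the original answer), concat_ws also works and takes an explicit separator; and if personal_id should ultimately be numeric rather than a string, the concatenated value can be cast afterwards. A sketch assuming the same employees DataFrame:

import org.apache.spark.sql.functions.concat_ws
// Same idea as concat, with an (empty) separator between the parts.
val anotherdf3 = employees.withColumn(
  "personal_id",
  concat_ws("", $"DepartmentID".cast("String"), $"Code".cast("String"))
)
// If a numeric id is wanted instead of a string, cast the result explicitly:
val numericId = anotherdf3.withColumn("personal_id", $"personal_id".cast("long"))
numericId.printSchema()  // personal_id: long (nullable = true)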