I created the following dataframe from an RDD using reduceByKey. I want to split the first column (which was originally the key) into two new columns, splitting on the comma.
scala> result_winr_table.schema
res10: org.apache.spark.sql.types.StructType = StructType(StructField(_1,StructType(StructField(_1,IntegerType,false), StructField(_2,IntegerType,false)),true), StructField(_2,DoubleType,false))
scala> result_winr_table
res5: org.apache.spark.sql.DataFrame = [_1: struct<_1:int,_2:int>, _2: double]
scala> result_winr_table.show
+--------+-------------------+
| _1| _2|
+--------+-------------------+
| [31,88]| 0.475|
| [18,91]| 0.5833333333333334|
| [56,95]|0.37142857142857144|
| [70,61]| 0.6266666666666667|
|[104,11]| 0.4527911784975879|
| [42,58]| 0.6857142857142857|
| [13,82]| 0.3333333333333333|
| [30,18]|0.49310344827586206|
| [99,18]|0.44285714285714284|
| [53,31]| 0.2981366459627329|
| [52,84]| 0.4444444444444444|
| [60,38]| 0.38|
| [79,9]|0.36666666666666664|
| [20,85]| 0.4389312977099237|
| [61,87]| 0.4807692307692308|
| [3,67]| 0.4245810055865922|
| [62,84]|0.47796610169491527|
| [9,32]| 0.4727272727272727|
| [94,44]| 0.5698324022346368|
| [50,67]|0.45083487940630795|
+--------+-------------------+
I tried using the split method directly on the column, but it doesn't work because of a type mismatch (the column is a struct, not a string).
What is the best way to achieve this?
Answer 0 (score: 5)
Given that the schema is
root
|-- _1: struct (nullable = true)
| |-- _1: integer (nullable = false)
| |-- _2: integer (nullable = false)
|-- _2: double (nullable = false)
you can use the withColumn API as follows:
result_winr_table.withColumn("first", $"_1._1")
.withColumn("second", $"_1._2")
If you don't want the original column, you can drop it with .drop("_1").
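For reference, a minimal end-to-end sketch of this approach, assuming a spark-shell session (so the $ column syntax is already in scope; in a standalone app you would add import spark.implicits._). The variable name matches the question:

val flattened = result_winr_table
  .withColumn("first", $"_1._1")   // extract the first struct field
  .withColumn("second", $"_1._2")  // extract the second struct field
  .drop("_1")                      // discard the original struct column

flattened.show()
// leaves three top-level columns: _2 (the double), first, second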
Answer 1 (score: 1)
If you have a complex struct whose attribute names are not known at compile time, you can do the following:
case class Foo(a: Int, b: String, c: Boolean)
val df = Seq( (1, Foo(2, "three", false)), (2, Foo(4, "five", true)) ).toDF("id", "foo")
df.show
+---+-----------------+
| id| foo|
+---+-----------------+
| 1|[2, three, false]|
| 2| [4, five, true]|
+---+-----------------+
df.select($"*", $"foo.*").show
+---+-----------------+---+-----+-----+
| id| foo| a| b| c|
+---+-----------------+---+-----+-----+
| 1|[2, three, false]| 2|three|false|
| 2| [4, five, true]| 4| five| true|
+---+-----------------+---+-----+-----+
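Applied to the question's dataframe, the same star expansion would look something like the sketch below; note the column names are whatever Spark derives from the struct's field names, so here they collide with the outer column:

// Expand every field of the _1 struct into its own column alongside _2.
result_winr_table.select($"_1.*", $"_2").show()
// The result has columns _1, _2 (from the struct) and a second _2 (the double);
// rename them with .toDF("first", "second", "ratio") if the clash is a problem.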
Answer 2 (score: 0)
As usual, preferring to solve this with Spark SQL, the following query will flatten the dataframe/table in Spark 1.6+:
sqlContext.sql(s""" select _1["_1"] as col1, _1["_2"] as col2, _2 as col3 from result_winr_table """)
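Note that sqlContext.sql can only see the dataframe once it has been registered as a temp table; a minimal sketch under that assumption, using the Spark 1.6 API:

// Register the dataframe so SQL can reference it by name
// (Spark 1.6 API; on Spark 2.x use createOrReplaceTempView instead).
result_winr_table.registerTempTable("result_winr_table")

val flat = sqlContext.sql(
  """select _1["_1"] as col1, _1["_2"] as col2, _2 as col3
     from result_winr_table""")
flat.show()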