Converting a DataFrame "string" column to Array[Int]

Asked: 2018-12-06 04:34:50

Tags: scala apache-spark dataframe functional-programming

I'm new to Scala and Spark, and I'm trying to read a CSV file locally (for testing):

val spark = org.apache.spark.sql.SparkSession.builder
  .master("local")
  .appName("Spark CSV Reader")
  .getOrCreate()

val topics_df = spark.read
  .format("csv")
  .option("header", "true")
  .load("path-to-file.csv")

topics_df.show(10)

The file looks like this:

+-----+--------------------+--------------------+
|topic|         termindices|         termweights|
+-----+--------------------+--------------------+
|   15|[21,31,51,108,101...|[0.0987100701,0.0...|
|   16|[42,25,121,132,55...|[0.0405490884,0.0...|
|    7|[1,23,38,7,63,0,1...|[0.1793091892,0.0...|
|    8|[13,40,35,104,153...|[0.0737646511,0.0...|
|    9|[2,10,93,9,158,18...|[0.1639456608,0.1...|
|    0|[28,39,71,46,123,...|[0.0867449145,0.0...|
|    1|[11,34,36,110,112...|[0.0729913664,0.0...|
|   17|[6,4,14,82,157,61...|[0.1583892199,0.1...|
|   18|[9,27,74,103,166,...|[0.0633899386,0.0...|
|   19|[15,81,289,218,34...|[0.1348582482,0.0...|
+-----+--------------------+--------------------+

ReadSchema: struct<topic:string,termindices:string,termweights:string>

The termindices column should be of type Array[Int], but when saved as CSV it is a String (this usually isn't an issue when reading from a database).

How do I convert the column types and ultimately cast the DataFrame to:

case class TopicDFRow(topic: Int, termIndices: Array[Int], termWeights: Array[Double])

I already have a function that performs the conversion:

termIndices.substring(1, termIndices.length - 1).split(",").map(_.toInt)

I've looked into UDFs and a few other solutions, but I'm convinced there must be a cleaner and faster way to perform the above conversion. Any help would be greatly appreciated!

1 Answer:

Answer 0 (score: 3):

UDFs should be avoided when more efficient built-in Spark functions can be used instead. As far as I know, there is no better way than the one proposed: strip the first and last characters of the string, split on commas, and cast.
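For contrast, here is a minimal sketch of what the UDF route mentioned in the question could look like (variable names are illustrative); it works, but the UDF body is opaque to the Catalyst optimizer, which is why the built-in-function version below is preferable:

import org.apache.spark.sql.functions.udf
import spark.implicits._

// Illustrative UDF wrapping the question's conversion function.
val parseIntArray = udf { s: String =>
  s.substring(1, s.length - 1).split(",").map(_.trim.toInt)
}
val viaUdf = df.withColumn("termindices", parseIntArray($"termindices"))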

Using built-in functions, this can be done as follows:

import org.apache.spark.sql.functions.{length, lit, split}
import spark.implicits._

df.withColumn("topic", $"topic".cast("int"))  // topic is also a string in the CSV; cast it before .as[...]
  .withColumn("termindices", split($"termindices".substr(lit(2), length($"termindices") - 2), ",").cast("array<int>"))
  .withColumn("termweights", split($"termweights".substr(lit(2), length($"termweights") - 2), ",").cast("array<double>"))
  .as[TopicDFRow]

substr is 1-indexed, so starting at position 2 drops the leading "[". The second argument is a length, not an end position, hence the -2, which also drops the trailing "]".
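To illustrate the indexing (with a hypothetical sample value), here is the same slicing done on a plain Scala string; note that java.lang.String.substring is 0-indexed, unlike the 1-indexed Column.substr:

// Hypothetical sample value from the termindices column.
val s = "[21,31,51]"                       // length 10
// Column.substr(2, 10 - 2) keeps characters 2..9 (1-indexed): "21,31,51"
// The equivalent with the 0-indexed String API:
val inner = s.substring(1, s.length - 1)   // "21,31,51"
val ints  = inner.split(",").map(_.toInt)  // Array(21, 31, 51)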

The final call converts the DataFrame into a Dataset of type TopicDFRow.
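As a quick sanity check (illustrative; assumes the resulting Dataset is bound to ds), the typed columns are now directly usable:

ds.printSchema()                   // topic: int, termindices: array<int>, termweights: array<double>
ds.map(_.termIndices.max).show(5)  // termIndices is a real Array[Int], no string parsing needed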