I am doing something like this:
val domainList = data1.select("columnname","domainvalues").where(col("domainvalues").isNotNull).map(r => (r.getString(0), r.getList[String](1).asScala.toList)).collect()
The type of domainList should be Array[(String, List[String])].
For the input DF:
+-------------+----------------------------------------+
|columnname |domainvalues |
+-------------+----------------------------------------+
|predchurnrisk|Very High,High,Medium,Low |
|userstatus |Active,Lapsed,Renew |
|predinmarket |Very High,High,Medium,Low |
|predsegmentid|High flyers,Watching Pennies,Big pockets|
|usergender |Male,Female,Others |
+-------------+----------------------------------------+
The error I am getting is:
java.lang.ClassCastException: java.lang.String cannot be cast to scala.collection.Seq
at org.apache.spark.sql.Row$class.getSeq(Row.scala:283)
at org.apache.spark.sql.catalyst.expressions.GenericRow.getSeq(rows.scala:166)
at org.apache.spark.sql.Row$class.getList(Row.scala:291)
at org.apache.spark.sql.catalyst.expressions.GenericRow.getList(rows.scala:166)
at com.fis.sdi.ade.batch.SFTP.Test$$anonfun$6.apply(Test.scala:53)
at com.fis.sdi.ade.batch.SFTP.Test$$anonfun$6.apply(Test.scala:53)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.mapelements_doConsume_0$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.deserializetoobject_doConsume_0$(Unknown Source)
How should I fix this?
Answer (score: 0)
It looks like your second column contains plain string values rather than arrays; you can check this with df.printSchema(). In that case, Row.getList fails with the ClassCastException you see, and you can instead split the string yourself with .split(","):
.map(r => (r.getString(0), r.getString(1).split(",").toList)).collect()
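The split step itself can be illustrated without Spark. Below is a minimal plain-Scala sketch using hypothetical sample rows taken from the table above, showing how comma-separated strings become the desired Array[(String, List[String])]:

```scala
// Hypothetical sample data mirroring the DataFrame rows:
// (columnname, domainvalues as a single comma-separated String)
val rows = Seq(
  ("predchurnrisk", "Very High,High,Medium,Low"),
  ("userstatus", "Active,Lapsed,Renew")
)

// Split each domainvalues string on "," and convert the resulting
// Array[String] to List[String] to match Array[(String, List[String])]
val domainList: Array[(String, List[String])] =
  rows.map { case (name, values) => (name, values.split(",").toList) }.toArray

domainList.foreach(println)
```

In the real Spark job the same logic runs inside the `.map` over rows, with `r.getString(1)` supplying the comma-separated string; note that `split` returns an `Array[String]`, so `.toList` is needed if you want `List[String]` in the result type.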