I have a dataframe column named "description" whose values look like this:
ABC XXXXXXXXXXXX STORE NAME ABC TYPE1
I want to parse it into 3 separate columns, like so:
| mode | type  | store      | description                           |
|------|-------|------------|---------------------------------------|
| ABC  | TYPE1 | STORE NAME | ABC XXXXXXXXXXXX STORE NAME ABC TYPE1 |
I tried an approach along the lines of the one suggested here. It works for a simple UDF, but not for the function I wrote. The challenge is that the store value can run to two or more words, with no fixed word count.
import org.apache.spark.sql.functions.{col, udf}
import org.apache.spark.sql.types._

def myFunc1: (String => (String, String, String)) = { description =>
  var descripe = description.split(" ")
  // The last token is the type.
  val `type` = descripe(descripe.size - 1)
  // Re-split on the text between the first "ABC " and the last "ABC".
  descripe = description.substring(description.indexOf("ABC") + 4, description.lastIndexOf("ABC")).split(" ")
  val mode = descripe(0)
  // Blank out the mode token so only the store name remains.
  descripe(0) = ""
  val store = descripe.mkString(" ").trim
  (mode, store, `type`)
}
val schema = StructType(Array(
  StructField("mode", StringType, true),
  StructField("store", StringType, true),
  StructField("type", StringType, true)
))
val myUDF = udf(myFunc1, schema)
val test = pos.withColumn("test", myUDF(col("description")))
test.printSchema()
val a = test.withColumn("mode", col("test").getItem("_1"))
  .withColumn("store", col("test").getItem("_2"))
  .withColumn("type", col("test").getItem("_3"))
  .drop(col("test"))
a.printSchema()
a.show(5, false)
When I run it, I get the following error:
18/10/06 21:38:02 ERROR Executor: Exception in task 0.0 in stage 5.0 (TID 5)
org.apache.spark.SparkException: Failed to execute user defined function($anonfun$myFunc1$1$1: (string) => struct(mode:string,store:string,type:string))
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
  at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
  at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:234)
  at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:228)
  at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
  at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827)
  at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
  at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
  at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
  at org.apache.spark.scheduler.Task.run(Task.scala:108)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.StringIndexOutOfBoundsException: String index out of range: -4
  at java.lang.String.substring(String.java:1967)
  at com.hasif.bank.track.trasaction.TransactionParser$$anonfun$myFunc1$1$1.apply(TransactionParser.scala:26)
  at com.hasif.bank.track.trasaction.TransactionParser$$anonfun$myFunc1$1$1.apply(TransactionParser.scala:22)
  ... 16 more
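From the trace, the likely culprit is the substring call: description.indexOf("ABC") + 4 lands past description.lastIndexOf("ABC") whenever "ABC" occurs only once in a row, so the requested range has a negative length (hence the -4). A minimal sketch of that failure mode, using a hypothetical row:

// Hypothetical row in which "ABC" appears only once.
val description = "ABC XXXXXXXXXXXX STORE NAME TYPE1"
val begin = description.indexOf("ABC") + 4  // 0 + 4 = 4
val end = description.lastIndexOf("ABC")    // same single occurrence, so 0
description.substring(begin, end)           // StringIndexOutOfBoundsException: String index out of range: -4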
Any pointers on this would be appreciated.
Answer (score: 1)
Check this out.
scala> val df = Seq("ABC XXXXXXXXXXXX STORE NAME ABC TYPE1").toDF("desc")
df: org.apache.spark.sql.DataFrame = [desc: string]
scala> df.withColumn("mode",split('desc," ")(0)).withColumn("type",split('desc," ")(5)).withColumn("store",concat(split('desc," ")(2), lit(" "), split('desc," ")(3))).show(false)
+-------------------------------------+----+-----+----------+
|desc |mode|type |store |
+-------------------------------------+----+-----+----------+
|ABC XXXXXXXXXXXX STORE NAME ABC TYPE1|ABC |TYPE1|STORE NAME|
+-------------------------------------+----+-----+----------+
scala>
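This works for the sample row, but the token positions are hard-coded: split('desc," ")(5) assumes the type is always the sixth token and the store always tokens 2 and 3. With a longer store name the positions shift; a quick sketch in the same spark-shell session, using a hypothetical longer row:

// Hypothetical row with a three-word store name: token 5 is now the
// second "ABC", not the type.
val longer = Seq("ABC XXXXXXXXXXXX STORE NAME XYZ ABC TYPE1").toDF("desc")
longer.select(split('desc, " ")(5).as("token5")).show(false)  // prints ABC, not TYPE1

Update 1 below addresses this by indexing from the end of the token array instead.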
更新1:
scala> def splitStore(x:String):String=
| return x.split(" ").drop(2).init.init.mkString(" ")
splitStore: (x: String)String
scala> val mysplitstore = udf(splitStore(_:String):String)
mysplitstore: org.apache.spark.sql.expressions.UserDefinedFunction = UserDefinedFunction(<function1>,StringType,Some(List(StringType)))
scala> val df2 = Seq("ABC XXXXXXXXXXXX STORE NAME XYZ ABC TYPE1").toDF("desc")
df2: org.apache.spark.sql.DataFrame = [desc: string]
scala> val df3 = df2.withColumn("length",split('desc," "))
df3: org.apache.spark.sql.DataFrame = [desc: string, length: array<string>]
scala> val df4 = df3.withColumn("mode",split('desc," ")(size('length)-2)).withColumn("type",split('desc," ")(size('length)-1)).withColumn("store",mysplitstore('desc))
df4: org.apache.spark.sql.DataFrame = [desc: string, length: array<string> ... 3 more fields]
scala> df4.drop('length).show(false)
+-----------------------------------------+----+-----+--------------+
|desc |mode|type |store |
+-----------------------------------------+----+-----+--------------+
|ABC XXXXXXXXXXXX STORE NAME XYZ ABC TYPE1|ABC |TYPE1|STORE NAME XYZ|
+-----------------------------------------+----+-----+--------------+
scala>
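For comparison, here is a UDF-free sketch using regexp_extract, continuing the same session. It assumes every description starts with the mode token followed by one filler token and ends with "&lt;mode&gt; &lt;type&gt;"; the greedy middle group then captures a store name of any length:

import org.apache.spark.sql.functions._

// Groups: 1 = mode (first token), 2 = store (greedy middle), 3 = type (last token).
// The greedy (.+) stretches as far as possible while still leaving two trailing
// tokens, so multi-word store names are handled without any index arithmetic.
val pattern = "^(\\S+) \\S+ (.+) \\S+ (\\S+)$"

val df5 = df2
  .withColumn("mode", regexp_extract('desc, pattern, 1))
  .withColumn("type", regexp_extract('desc, pattern, 3))
  .withColumn("store", regexp_extract('desc, pattern, 2))

df5.show(false)
// +-----------------------------------------+----+-----+--------------+
// |desc                                     |mode|type |store         |
// +-----------------------------------------+----+-----+--------------+
// |ABC XXXXXXXXXXXX STORE NAME XYZ ABC TYPE1|ABC |TYPE1|STORE NAME XYZ|
// +-----------------------------------------+----+-----+--------------+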