I created a DataFrame using HiveContext, where one column holds records like this:

text1    text2

We want to collapse the run of spaces between the two texts into a single space, so the output becomes:

text1 text2

How can we achieve this in Spark SQL? Note that we are using a Hive context, registering a temporary table, and writing SQL queries against it.
Answer 0 (score: 1)
import org.apache.spark.sql.functions._
import spark.implicits._ // needed for toDF on a local List

// Wrapping the result in an Array works around a REPL error:
// error: object java.lang.String is not a value --> use Array
val myUDf = udf((s: String) => Array(s.trim.replaceAll(" +", " ")))

val data = List("i like cheese", " the dog runs ", "text111111 text2222222")
val df = data.toDF("val")
df.show()

val new_df = df
  .withColumn("udfResult", myUDf(col("val")))
  .withColumn("new_val", col("udfResult")(0)) // unwrap the single-element Array
  .drop("udfResult")
new_df.show
Output on Databricks
+--------------------+
| val|
+--------------------+
| i like cheese|
| the dog runs |
|text111111 text...|
+--------------------+
+--------------------+--------------------+
| val| new_val|
+--------------------+--------------------+
| i like cheese| i like cheese|
| the dog runs | the dog runs|
|text111111 text...|text111111 text22...|
+--------------------+--------------------+
Answer 1 (score: 1)
Even better: I have since been enlightened by a true expert. It is actually simpler:
import org.apache.spark.sql.functions._
import spark.implicits._ // needed for toDF on a local List

// val myUDf = udf((s: String) => Array(s.trim.replaceAll(" +", " ")))
val myUDf = udf((s: String) => s.trim.replaceAll("\\s+", " ")) // <-- no Array(...) needed
// Then there is no need to play with columns excessively:
val data = List("i like cheese", " the dog runs ", "text111111 text2222222")
val df = data.toDF("val")
df.show()
val new_df = df.withColumn("new_val", myUDf(col("val")))
new_df.show
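Since Spark also ships a built-in `regexp_replace` column function, the UDF can be avoided entirely on the DataFrame side. A minimal sketch, assuming a local `SparkSession` (the object and app names here are made up for illustration):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, regexp_replace, trim}

object CollapseSpaces {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("collapse-spaces")
      .getOrCreate()
    import spark.implicits._

    val df = List("text111111    text2222222", " the dog   runs ").toDF("val")

    // Built-in regexp_replace avoids the UDF (and its serialization overhead);
    // trim strips leading/trailing whitespace before collapsing inner runs.
    val cleaned = df.withColumn(
      "new_val",
      regexp_replace(trim(col("val")), "\\s+", " ")
    )
    cleaned.show(false)

    spark.stop()
  }
}
```

Built-in column functions are generally preferable to UDFs because Catalyst can optimize them, whereas a UDF is an opaque black box to the planner.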
Answer 2 (score: 0)
Just do it in spark.sql:
regexp_replace( COLUMN, ' +', ' ')
https://spark.apache.org/docs/latest/api/sql/index.html#regexp_replace
Check this out:
spark.sql("""
  select regexp_replace(col1, ' +', ' ') as col2
  from (
    select 'text1  text2   text3' as col1
  )
""").show(20, false)
Output
+-----------------+
|col2 |
+-----------------+
|text1 text2 text3|
+-----------------+
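To match the workflow described in the question (registering a temporary table and querying it with SQL), the same `regexp_replace` can be run against a registered view. A sketch assuming Spark 2.x+, where `SparkSession` (optionally with `.enableHiveSupport()`) replaces the old `HiveContext`; the view and column names are made up for illustration:

```scala
import org.apache.spark.sql.SparkSession

object RegexpReplaceOnTempTable {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("regexp-replace-sql")
      .getOrCreate()
    import spark.implicits._

    val df = List("text1    text2", " a   b ").toDF("val")

    // Register the DataFrame as a temp table, as in the question.
    df.createOrReplaceTempView("my_table")

    // trim removes leading/trailing spaces; ' +' collapses inner runs.
    spark.sql("""
      select regexp_replace(trim(val), ' +', ' ') as new_val
      from my_table
    """).show(false)

    spark.stop()
  }
}
```

The same query would also work unchanged on a Hive-backed table when Hive support is enabled.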