How to split a string on "|" (pipe) and convert the RDD to a DataFrame

Date: 2018-05-31 07:09:24

Tags: scala apache-spark apache-spark-sql rdd

I am trying to read a text file containing product information, delimited by "|" (pipe). When I read the data as an RDD and then split it on the delimiter "|", the data gets mangled. I cannot understand why this happens.

#### Input data
productId|price|saleEvent|rivalName|fetchTS 
12345|78.73|Special|VistaCart.com|2017-05-11 15:39:30 
12345|45.52|Regular|ShopYourWay.com|2017-05-11 16:09:43 
12345|89.52|Sale|MarketPlace.com|2017-05-11 16:07:29 
678|1348.73|Regular|VistaCart.com|2017-05-11 15:58:06 
678|1348.73|Special|ShopYourWay.com|2017-05-11 15:44:22 
678|1232.29|Daily|MarketPlace.com|2017-05-11 15:53:03 
777|908.57|Daily|VistaCart.com|2017-05-11 15:39:01 
#### spark-shell code
import org.apache.spark.sql.Encoder
import spark.implicits._

case class Product(productId:Int, price:Double, saleEvent:String, rivalName:String, fetchTS:String)
val rdd = spark.sparkContext.textFile("/home/prabhat/Documents/Spark/sampledata/competitor_data_10.txt")
// remove the header row
val x = rdd.mapPartitionsWithIndex{(idx,iter) => if(idx==0)iter.drop(1) else iter}
// why does RDD x get split into single characters here?
x.map(x => x.split("|")).take(10)
res74: Array[Array[String]] = Array(Array(1, 2, 3, 4, 5, |, 3, 9, 9, ., 7, 3, |, S, p, e, c, i, a, l, |, V, i, s, t, a, C, a, r, t, ., c, o, m, |, 2, 0, 1, 7, -, 0, 5, -, 1, 1, " ", 1, 5, :, 3, 9, :, 3, 0, " "), Array(1, 2, 3, 4, 5, |, 3, 8, 8, ., 5, 2, |, R, e, g, u, l, a, r, |, S, h, o, p, Y, o, u, r, W, a, y, ., c, o, m, |, 2, 0, 1, 7, -, 0, 5, -, 1, 1, " ", 1, 6, :, 0, 9, :, 4, 3, " "), Array(1, 2, 3, 4, 5, |, 3, 8, 8, ., 5, 2, |, S, a, l, e, |, M, a, r, k, e, t, P, l, a, c, e, ., c, o, m, |, 2, 0, 1, 7, -, 0, 5, -, 1, 1, " ", 1, 6, :, 0, 7, :, 2, 9, " "),  ...

x.map(x => x.split("|")).map(y => Product(y(0).toInt, y(1).toDouble, y(2), y(3), y(4))).toDF.show
+---------+-----+---------+---------+-------+
|productId|price|saleEvent|rivalName|fetchTS|
+---------+-----+---------+---------+-------+
|        1|  2.0|        3|        4|      5|
|        1|  2.0|        3|        4|      5|
|        1|  2.0|        3|        4|      5|
|        4|  3.0|        1|        5|      7|
|        4|  3.0|        1|        5|      7|
|        4|  3.0|        1|        5|      7|
|        3|  6.0|        1|        3|      0|
|        3|  6.0|        1|        3|      0|
+---------+-----+---------+---------+-------+

Why does the output look like the above? It should be similar to this:

+---------+-----+---------+-------------+-------------------+
|productId|price|saleEvent|rivalName    |fetchTS            |
+---------+-----+---------+-------------+-------------------+
|    12345|78.73|  Special|VistaCart.com|2017-05-11 15:39:30|
+---------+-----+---------+-------------+-------------------+

1 Answer:

Answer 0 (score: 1)

split takes a regular expression, and "|" is the regex alternation operator: on its own it matches the empty string between every pair of characters, which is why each character comes back as a separate element. Use "\\|" instead of "|" to match the pipe literally:

x.map(x => x.split("\\|"))
  .map(y => Product(y(0).toInt, y(1).toDouble, y(2), y(3), y(4)))
  .toDF
  .show(false)

This should give you the correct result.
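To see the difference in isolation, here is a minimal plain-Scala sketch you can paste into any REPL (no Spark needed; the value name "line" is introduced here just for illustration):

val line = "12345|78.73|Special|VistaCart.com|2017-05-11 15:39:30"

// "|" alone is an alternation of two empty patterns, so it matches the
// empty string between every pair of characters:
line.split("|")    // Array(1, 2, 3, 4, 5, |, 7, 8, ., 7, 3, |, ...)

// escaping the pipe makes it a literal character in the regex:
line.split("\\|")  // Array(12345, 78.73, Special, VistaCart.com, 2017-05-11 15:39:30)

// the Char overload of split never goes through the regex engine at all:
line.split('|')    // same result as above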

Also, if you want to end up with a DataFrame anyway, why not read the file directly:

import org.apache.spark.sql.Encoders

spark.read
  .option("header", true)
  .option("delimiter", "|")
  .schema(Encoders.product[Product].schema)
  .csv("testfile.txt")
  .as[Product]

Output:

+---------+-------+---------+---------------+--------------------+
|productId|price  |saleEvent|rivalName      |fetchTS             |
+---------+-------+---------+---------------+--------------------+
|12345    |78.73  |Special  |VistaCart.com  |2017-05-11 15:39:30 |
|12345    |45.52  |Regular  |ShopYourWay.com|2017-05-11 16:09:43 |
|12345    |89.52  |Sale     |MarketPlace.com|2017-05-11 16:07:29 |
|678      |1348.73|Regular  |VistaCart.com  |2017-05-11 15:58:06 |
|678      |1348.73|Special  |ShopYourWay.com|2017-05-11 15:44:22 |
|678      |1232.29|Daily    |MarketPlace.com|2017-05-11 15:53:03 |
|777      |908.57 |Daily    |VistaCart.com  |2017-05-11 15:39:01 |
+---------+-------+---------+---------------+--------------------+
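
As a follow-up sketch (the value name "ds" is introduced here for illustration), assigning the read above to a value gives you a typed Dataset[Product], so downstream transformations can use the case-class fields directly instead of string column names:

import org.apache.spark.sql.Encoders

val ds = spark.read
  .option("header", true)
  .option("delimiter", "|")
  .schema(Encoders.product[Product].schema)
  .csv("testfile.txt")
  .as[Product]

// compile-checked field access on the case class
ds.filter(_.price > 1000.0).show(false)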