Scala - How to filter an RDD of type org.apache.spark.rdd.RDD[String]

Asked: 2017-11-22 19:57:28

Tags: scala apache-spark rdd

I have an RDD that I need to filter by price. Here is the RDD:

id      category_id       product_name                               price   
1       2            Quest Q64 10 FT. x 10 FT. Slant Leg Instant U   59.98
2       2            Under Armour Men's Highlight MC Football Clea   129.99
3       2            Under Armour Men's Renegade D Mid Football Cl   89.99
4       2            Under Armour Men's Renegade D Mid Football Cl   89.99
5       2            Riddell Youth Revolution Speed Custom Footbal   199.99
6       2            Jordan Men's VI Retro TD Football Cleat         134.99  
7       2            Schutt Youth Recruit Hybrid Custom Football H   99.99
8       2            Nike Men's Vapor Carbon Elite TD Football Cle   129.99
9       2            Nike Adult Vapor Jet 3.0 Receiver Gloves        50.0

I get the following error:

scala> val rdd2 = rdd1.map(_.split("\t")).map(c => c(3) < 100)
<console>:44: error: type mismatch;
 found   : Int(100)
 required: String
       val rdd2 = rdd1.map(_.split("\t")).map(c => c(3) < 100)
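The error arises because c(3) is still a String, and Scala will not compare a String with an Int. A minimal sketch of a fix (assuming the same tab-separated input and that the header row has already been removed): convert the field to Double first, and use filter rather than map so matching rows are kept instead of being turned into Booleans.

val rdd2 = rdd1.map(_.split("\t")).filter(c => c(3).toDouble < 100)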

df.printSchema()

root
 |-- id: integer (nullable = true)
 |-- category_id: integer (nullable = true)
 |-- product_name: string (nullable = true)
 |-- price: double (nullable = true)
 |-- image: string (nullable = true)

2 Answers:

Answer 0 (score: 0)

Based on your df.printSchema(), you can filter the table with a query on price:
df.filter(df.col("price") < 100).show
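Equivalent ways to write the same filter (a sketch; the $ column syntax assumes import spark.implicits._ is in scope for your SparkSession):

df.filter($"price" < 100).show
df.where("price < 100").show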

Answer 1 (score: 0)

You can simply read the file with sparkContext.textFile and perform the following computation:

// read the raw text file into an RDD[String]
val rdd1 = sparkSession.sparkContext.textFile("text file location")
// split each line on tabs, drop the header row (where column 3 is the
// literal word "price"), then keep rows whose price is below 100
val rdd2 = rdd1.map(_.split("\t")).filter(c => !"price".equalsIgnoreCase(c(3).trim)).filter(c => c(3).toDouble < 100)
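To sanity-check the result, a quick hypothetical inspection of the first few rows:

rdd2.take(5).foreach(arr => println(arr.mkString(", ")))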

If you already have a dataframe, you don't need to convert it back to an RDD for this computation; you can filter on the dataframe itself:

// drop the header row (where the price column holds the literal "price"),
// then cast price to Double and keep rows below 100
val finaldf = df.filter($"price" =!= "price").filter($"price".cast(DoubleType) < 100)
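Note that this snippet assumes the following imports are in scope (DoubleType comes from Spark SQL's types package, and the $ syntax from the session's implicits):

import org.apache.spark.sql.types.DoubleType
import sparkSession.implicits._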