Working with dates in Spark

Asked: 2015-09-03 04:13:52

Tags: date apache-spark filtering rdd

I need to parse a CSV file, identify the records that fall between two specific dates, and compute the total and average sale amount for each salesperson per ProductCategory within that period. Here is the structure of the CSV file:

SalesPersonId,SalesPersonName,SaleDate,SaleAmount,ProductCategory

Please help with this query. I am looking for a solution in Scala.

What I have tried:

I used SimpleDateFormat as described below:

    val format = new java.text.SimpleDateFormat("MM/dd/yyyy")

and created an RDD with the following code:

    val onlyHouseLoan = readFile.map(line => (line.split(",")(0), line.split(",")(2), line.split(",")(3).toLong, format.parse(line.split(",")(4).toString())))

However, when I tried to use Calendar on top of the highlighted expression, I got a NumberFormatException.
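Two things in the snippet above are worth checking against the CSV layout: SaleDate is column index 2 (not 4, which is ProductCategory), and the header row will make .toLong on SaleAmount throw a NumberFormatException. A minimal sketch, assuming readFile is an RDD[String] that still contains the header row:

    // Hypothetical sketch: skip the header and parse columns according to
    // SalesPersonId,SalesPersonName,SaleDate,SaleAmount,ProductCategory
    val format = new java.text.SimpleDateFormat("MM/dd/yyyy")
    val parsed = readFile
      .filter(line => !line.startsWith("SalesPersonId"))   // header would break .toLong
      .map { line =>
        val cols = line.split(",")
        (cols(0), cols(1), format.parse(cols(2)), cols(3).toLong, cols(4))
      }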

1 Answer:

Answer 0 (score: 0)

So, just creating a quick RDD in the format of the CSV file you describe:

val list = sc.parallelize(List(("1","Timothy","04/02/2015","100","TV"), ("1","Timothy","04/03/2015","10","Book"), ("1","Timothy","04/03/2015","20","Book"), ("1","Timothy","04/05/2015","10","Book"),("2","Ursula","04/02/2015","100","TV")))
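If the data lives in an actual file rather than an in-memory list, the same tuple shape can be built with sc.textFile instead of sc.parallelize; a small sketch, assuming a hypothetical path sales.csv and an optional header row:

    val list = sc.textFile("sales.csv")                    // "sales.csv" is an assumed path
      .filter(line => !line.startsWith("SalesPersonId"))   // skip the header row if present
      .map(_.split(","))
      .map(a => (a(0), a(1), a(2), a(3), a(4)))            // Id, Name, SaleDate, SaleAmount, ProductCategory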

and then running:

import java.time.LocalDate
import java.time.format.DateTimeFormatter

val startDate = LocalDate.of(2015,1,4)
val endDate = LocalDate.of(2015,4,5)

val result = list
    .filter{case(_,_,date,_,_) => {
         val localDate = LocalDate.parse(date, DateTimeFormatter.ofPattern("MM/dd/yyyy"))
         localDate.isAfter(startDate) && localDate.isBefore(endDate)}}
    .map{case(id, _, _, amount, category) => ((id, category), (amount.toDouble, 1))} 
    .reduceByKey((v1, v2) => (v1._1 + v2._1, v1._2 + v2._2)) 
    .map{case((id, category),(total, sales)) => (id, List((category, total, total/sales)))} 
    .reduceByKey(_ ++ _)
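Collecting the result back to the driver and printing it, for example with

    result.collect().foreach(println)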

will give you:

(1,List((Book,30.0,15.0), (TV,100.0,100.0)))
(2,List((TV,100.0,100.0)))

The format is (SalesPersonId, [(ProductCategory, TotalSaleAmount, AvgSaleAmount)]). Is this what you were looking for?
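One detail to note: isAfter/isBefore make both bounds exclusive, which is why the 04/05/2015 Book sale is not included in the totals above. If you want the range to include the boundary dates, a small variation of the filter step (same assumptions as above):

    .filter { case (_, _, date, _, _) =>
      val localDate = LocalDate.parse(date, DateTimeFormatter.ofPattern("MM/dd/yyyy"))
      !localDate.isBefore(startDate) && !localDate.isAfter(endDate)   // inclusive on both ends
    }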