I am trying to convert a column of my dataset into actual ages. I am using Scala with Spark, and my project is in IntelliJ.
Here is the sample dataset:
TotalCost|BirthDate|Gender|TotalChildren|ProductCategoryName
1000||Male|2|Technology
2000|1957-03-06||3|Beauty
3000|1959-03-06|Male||Car
4000|1953-03-06|Male|2|
5000|1957-03-06|Female|3|Beauty
6000|1959-03-06|Male|4|Car
7000|1957-03-06|Female|3|Beauty
8000|1959-03-06|Male|4|Car
And here is the Scala code:
import org.apache.spark.sql.SparkSession

object DataFrameFromCSVFile2 {

  def main(args: Array[String]): Unit = {

    val spark: SparkSession = SparkSession.builder()
      .master("local[1]")
      .appName("SparkByExample")
      .getOrCreate()

    val filePath = "src/main/resources/demodata.txt"

    val df = spark.read
      .options(Map("inferSchema" -> "true", "delimiter" -> "|", "header" -> "true"))
      .csv(filePath)
      .select("Gender", "BirthDate", "TotalCost", "TotalChildren", "ProductCategoryName")

    val df2 = df
      .filter("Gender is not null")
      .filter("BirthDate is not null")
      .filter("TotalChildren is not null")
      .filter("ProductCategoryName is not null")

    df2.show()
  }
}
So I am trying to convert a date of birth such as 1957-03-06 into an age of 61 in that column.
Any ideas would be a big help.
Thank you very much.
Answer 0 (score: 2)
Here is one way of using the java.time API in a UDF, together with Spark's built-in when/otherwise for the null check:
val currentAge = udf { (dob: java.sql.Date) =>
  import java.time.{LocalDate, Period}
  Period.between(dob.toLocalDate, LocalDate.now).getYears
}

df.withColumn("CurrentAge", when($"BirthDate".isNotNull, currentAge($"BirthDate"))).
  show(5)
// +------+-------------------+---------+-------------+-------------------+----------+
// |Gender| BirthDate|TotalCost|TotalChildren|ProductCategoryName|CurrentAge|
// +------+-------------------+---------+-------------+-------------------+----------+
// | Male| null| 1000| 2| Technology| null|
// | null|1957-03-06 00:00:00| 2000| 3| Beauty| 61|
// | Male|1959-03-06 00:00:00| 3000| null| Car| 59|
// | Male|1953-03-06 00:00:00| 4000| 2| null| 65|
// |Female|1957-03-06 00:00:00| 5000| 3| Beauty| 61|
// +------+-------------------+---------+-------------+-------------------+----------+
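To slot this into the pipeline from the question, the same UDF can be applied to the filtered DataFrame. A minimal sketch, assuming the df2 and spark values defined in the question and the currentAge UDF from this answer (spark.implicits._ is imported for the $ syntax):

import org.apache.spark.sql.functions.{udf, when}
import spark.implicits._ // for the $"..." column syntax

// df2 already drops null BirthDates, so the when guard here is only defensive.
val withAge = df2.withColumn("CurrentAge",
  when($"BirthDate".isNotNull, currentAge($"BirthDate")))

withAge.show()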
Answer 1 (score: 2)
You can use the built-in functions months_between() or datediff(). Check this out:
scala> val df = Seq("1957-03-06","1959-03-06").toDF("date")
df: org.apache.spark.sql.DataFrame = [date: string]
scala> df.show(false)
+----------+
|date |
+----------+
|1957-03-06|
|1959-03-06|
+----------+
scala> df.withColumn("age",months_between(current_date,'date)/12).show
+----------+------------------+
| date| age|
+----------+------------------+
|1957-03-06|61.806451612500005|
|1959-03-06|59.806451612500005|
+----------+------------------+
scala> df.withColumn("age",datediff(current_date,'date)/365).show
+----------+-----------------+
| date| age|
+----------+-----------------+
|1957-03-06|61.85205479452055|
|1959-03-06|59.85205479452055|
+----------+-----------------+
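Since the question asks for a whole-number age (61 rather than 61.8), the fractional result can be truncated. A small sketch, assuming the same df from the REPL session above and the standard functions in org.apache.spark.sql.functions:

import org.apache.spark.sql.functions.{col, current_date, floor, months_between}

// Truncate the fractional year count down to a whole-number age;
// for the two sample dates above this yields 61 and 59.
df.withColumn("age", floor(months_between(current_date(), col("date")) / 12)).show()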
Answer 2 (score: 0)
You can use the Java Calendar library to get the current date in your time zone and then calculate the age with a UDF. For example:
import java.time.{Period, ZoneId}
import java.util.Calendar
import org.apache.spark.sql.functions.udf

val data = Seq("1957-03-06", "1959-03-06").toDF("date")

val ageUdf = udf((inputDate: String) => {
  val format = new java.text.SimpleDateFormat("yyyy-MM-dd")
  // Parse the birth date and capture today's date in the system time zone.
  val birthDate = format.parse(inputDate).toInstant.atZone(ZoneId.systemDefault()).toLocalDate
  val currentDate = Calendar.getInstance().getTime.toInstant.atZone(ZoneId.systemDefault()).toLocalDate
  if ((birthDate != null) && (currentDate != null)) Period.between(birthDate, currentDate).getYears
  else 0
})

data.withColumn("age", ageUdf($"date")).show()
The output will be:
+----------+---+
|      date|age|
+----------+---+
|1957-03-06| 61|
|1959-03-06| 59|
+----------+---+
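Note that the SimpleDateFormat/Calendar round trip can likely be avoided, since the input strings are already in ISO yyyy-MM-dd form. A possible simplification using only java.time (a sketch, not the answer's original code; it assumes the same data DataFrame as above):

import java.time.{LocalDate, Period}
import org.apache.spark.sql.functions.{col, udf}

// LocalDate.parse handles the ISO yyyy-MM-dd format directly,
// so no SimpleDateFormat or Calendar conversion is needed.
val simpleAgeUdf = udf((inputDate: String) =>
  Period.between(LocalDate.parse(inputDate), LocalDate.now()).getYears)

data.withColumn("age", simpleAgeUdf(col("date"))).show()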