I have a Spark DataFrame, shown below, and I'm trying to add a new date column from a variable, but I get an error.
jsonDF.printSchema()
root
|-- Data: struct (nullable = true)
| |-- Record: struct (nullable = true)
| | |-- FName: string (nullable = true)
| | |-- LName: long (nullable = true)
| | |-- Address: struct (nullable = true)
| | | |-- Applicant: array (nullable = true)
| | | | |-- element: struct (containsNull = true)
| | | | | |-- Id: long (nullable = true)
| | | | | |-- Type: string (nullable = true)
| | | | | |-- Option: long (nullable = true)
| | | |-- Location: string (nullable = true)
| | | |-- Town: long (nullable = true)
| | |-- IsActive: boolean (nullable = true)
|-- Id: string (nullable = true)
I tried two approaches:
var_date='2019-07-15'
jsonDF.withColumn('my_date',to_date(var_date,'yyyy-MM-dd'))
jsonDF.select(to_date(var_date,'yyyy-MM-dd')).alias('my_date')
But both give me an error:
An error occurred while calling o50.withColumn.
: org.apache.spark.sql.AnalysisException: cannot resolve '`2019-07-15`' given input columns: [Data, Id];;
'Project [Data#8, Id#9, to_date('2019-07-15, Some(yyyy-MM-dd)) AS my_date#213]
+- Relation[Data#8, Id#11] json
An error occurred while calling o50.select.
: org.apache.spark.sql.AnalysisException: cannot resolve '`2019-07-15`' given input columns: [Data, Id];;
'Project [to_date('2019-07-15, Some(yyyy-MM-dd)) AS to_date(`2019-07-15`, 'yyyy-MM-dd'#210]
Please help.
Answer 0 (score: 1)
According to the official documentation, to_date takes a column as its argument. It is therefore trying to resolve a column named 2019-07-15, which does not exist in your DataFrame. You have to turn the value into a column first, and then apply the function.
from pyspark.sql import functions as F

var_date='2019-07-15'
# wrap the string in lit() so to_date receives a Column, not a column name
jsonDF.select(F.to_date(F.lit(var_date),'yyyy-MM-dd').alias('my_date'))
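If you want to keep all the existing columns and just append the new one, as in your original withColumn attempt, the same fix applies; a minimal sketch, reusing the jsonDF from your question:

from pyspark.sql import functions as F

var_date='2019-07-15'
# lit() turns the Python string into a literal Column, so to_date no longer
# tries to look up a column named 2019-07-15
jsonDF.withColumn('my_date', F.to_date(F.lit(var_date), 'yyyy-MM-dd'))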
Alternatively, you can use a Python datetime object directly:
import datetime
from pyspark.sql import functions as F

var_date=datetime.date(2019,7,15)
# lit() accepts a datetime.date, so the resulting column is already DateType
jsonDF.select(F.lit(var_date).alias('my_date'))
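With the datetime.date approach no to_date call is needed at all, since the literal already comes through as a date column. A quick check on a throwaway one-row DataFrame (the spark session and the toy Id column here are just for illustration):

import datetime
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
# one-row DataFrame, only used to demonstrate the resulting schema
demo = spark.createDataFrame([(1,)], ['Id'])
demo.withColumn('my_date', F.lit(datetime.date(2019, 7, 15))).printSchema()
# root
#  |-- Id: long (nullable = true)
#  |-- my_date: date (nullable = false)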