I created the following DataFrame:
val bankDF = sqlContext.read.format("com.databricks.spark.csv").option("header","true").option("inferSchema","true").option("delimiter",";").load("/user/pvviswanathan_yahoo_com/Bank_Dataset.csv");
bankDF: org.apache.spark.sql.DataFrame = ["age";"job";"marital";"education";"default";"balance";"housing";"loan";"contact";"day";"month";"duration";"campaign";"pdays";"previous";"poutcome";"y": string]
Then, when I try the following, it throws an error: cannot resolve column name "age" among the field names.
bankDF.groupBy("age").count().show;
org.apache.spark.sql.AnalysisException: Cannot resolve column name "age" among ("age";"job";"marital";"education";"default";"balance";"housing";"loan";"contact";"day";"month";"duration";"campaign";"pdays";"previous";"poutcome";"y");
Answer 0 (score: 1)
I ran into the same problem when trying to read a CSV file.
Dataset<Row> students = spark.read().format("csv")
.option("sep", ";")
.option("inferSchema", "true")
.option("header", "true")
.load("data/students.csv");
Following Raphael Roth's suggestion, I printed the students schema and found that Spark was indeed treating all the columns as a single value:
+----------------------+
|studentId, name, lname|
+----------------------+
| 1, Mickey, Mouse|
| 2, Donald, Duck|
+----------------------+
root
|-- studentId, name, lname: string (nullable = true)
The error I got was:
Cannot resolve column name "studentId" among (studentId, name, lname);
So the problem was indeed the separator character. I changed
.option("sep", ";")
to
.option("sep", ",")
(the file's actual CSV separator is `,`).
Now the schema is correct:
root
|-- studentId: integer (nullable = true)
|-- name: string (nullable = true)
|-- lname: string (nullable = true)
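If you are unsure which separator a file actually uses, you can guess it from a sample before passing it to Spark's `sep`/`delimiter` option. This is a minimal sketch using Python's standard-library `csv.Sniffer`; the `detect_delimiter` helper and the sample strings are hypothetical, not part of the original question.

```python
import csv

def detect_delimiter(sample: str, candidates: str = ",;\t|") -> str:
    """Guess the field delimiter of a CSV sample, restricted to the
    given candidate characters, using the stdlib Sniffer heuristics."""
    return csv.Sniffer().sniff(sample, delimiters=candidates).delimiter

# Comma-separated sample, like the answer's students.csv:
print(detect_delimiter("studentId,name,lname\n1,Mickey,Mouse"))

# Quoted, semicolon-separated header, like the question's Bank_Dataset.csv:
print(detect_delimiter('"age";"job";"marital";"education";"default"'))
```

The telltale symptom in both the question and the answer is a schema with exactly one string column whose name contains the real separator; sniffing the first line up front avoids that silent mis-parse.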