Casting from String to Int in a DataFrame yields null instead of a number

Asked: 2017-11-13 02:53:42

Tags: scala apache-spark spark-dataframe

Here is a sample of my code:

import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.IntegerType

val marketingproj5DF2 = marketingproj5DF
  .withColumn("ageTmp", 'age.cast(IntegerType))
  .drop("age")
  .withColumnRenamed("ageTmp", "age")

Here is what the DataFrame looks like afterwards:

scala> marketingproj5DF2.show(5)

+--------+----------------+-----------+-------------+-----------+-----------+-----------+--------+-----------+-------+---------+------------+------------+---------+------------+------------+-------+------+
|     age|             job|    marital|    education|    default|    balance|    housing|    loan|    contact|    day|    month|    duration|    campaign|    pdays|    previous|    poutcome|      y|ageTmp|
+--------+----------------+-----------+-------------+-----------+-----------+-----------+--------+-----------+-------+---------+------------+------------+---------+------------+------------+-------+------+
|"""age""|         ""job""|""marital""|""education""|""default""|""balance""|""housing""|""loan""|""contact""|""day""|""month""|""duration""|""campaign""|""pdays""|""previous""|""poutcome""| ""y"""|  null|
|     "58|  ""management""|""married""| ""tertiary""|     ""no""|       2143|    ""yes""|  ""no""|""unknown""|      5|  ""may""|         261|           1|       -1|           0| ""unknown""|""no"""|  null|
|     "44|  ""technician""| ""single""|""secondary""|     ""no""|         29|    ""yes""|  ""no""|""unknown""|      5|  ""may""|         151|           1|       -1|           0| ""unknown""|""no"""|  null|
|     "33|""entrepreneur""|""married""|""secondary""|     ""no""|          2|    ""yes""| ""yes""|""unknown""|      5|  ""may""|          76|           1|       -1|           0| ""unknown""|""no"""|  null|
|     "47| ""blue-collar""|""married""|  ""unknown""|     ""no""|       1506|    ""yes""|  ""no""|""unknown""|      5|  ""may""|          92|           1|       -1|           0| ""unknown""|""no"""|  null|
+--------+----------------+-----------+-------------+-----------+-----------+-----------+--------+-----------+-------+---------+------------+------------+---------+------------+------------+-------+------+
only showing top 5 rows

I am using Spark 1.6 with Scala 2.10.5. The first column is my original "age" column. The data was imported from a .csv file, and I could not get all of it into the DataFrame unless I left every column as a String. Now that I have the "age" column, I am trying to cast/convert the field so I can query against it.
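For context: when a String value cannot be parsed as a number, Spark's `cast` silently returns null rather than throwing, which is why every `ageTmp` value above is null. The values like `"58` carry a stray leading quote from the CSV import, so the parse fails. A minimal plain-Scala sketch of the same failure mode (no Spark required; `castToInt` is a hypothetical helper modeling `cast`'s null-on-failure behavior with `Option`):

```scala
import scala.util.Try

// Mimic Spark's cast-to-Int semantics: a failed parse yields null
// (modeled here as None) instead of raising an error.
def castToInt(s: String): Option[Int] = Try(s.trim.toInt).toOption

println(castToInt("\"58")) // None: the stray quote breaks the numeric parse
println(castToInt("58"))   // Some(58): parses once the quote is gone
```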

2 Answers:

Answer 0 (score: 0)

The problem is caused by the extra " characters in the age column; they need to be removed before the column can be cast to Int. Also, you do not need to create a temporary column, drop the original, and then rename the temporary column back to the original name. Simply overwrite the original column with withColumn().

regexp_replace can take care of the extra " characters:

import org.apache.spark.sql.functions.regexp_replace
import org.apache.spark.sql.types.IntegerType

val df = Seq("\"58", "\"44", "\"33", "\"47").toDF("age")
val df2 = df.withColumn("age", regexp_replace($"age", "\"", "").cast(IntegerType))

This produces the expected result:

+---+
|age|
+---+
| 58|
| 44|
| 33|
| 47|
+---+
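The regex does nothing more than delete every double-quote character before the numeric conversion. For illustration, here is the same transformation expressed on plain Scala strings (a sketch only; the actual column-level work is done by `regexp_replace` and `cast` as shown above):

```scala
import scala.util.Try

// Strip every '"' from the raw values, then attempt the Int conversion,
// discarding any value that still fails to parse.
val raw     = Seq("\"58", "\"44", "\"33", "\"47")
val cleaned = raw.map(_.replaceAll("\"", "")).flatMap(s => Try(s.toInt).toOption)

println(cleaned) // List(58, 44, 33, 47)
```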

Answer 1 (score: -1)

import org.apache.spark.sql

val marketingproj5DF2 = marketingproj5DF.withColumn("age", $"age".cast(sql.types.IntegerType))