How do I change the data type of a column in a PySpark dataframe?

Asked: 2017-09-26 17:46:03

Tags: dataframe casting pyspark

I am looking for a way to change the column type of a PySpark dataframe,

starting from the schema shown by

df.printSchema()

[schema screenshots of the current and desired column types omitted]

Thanks in advance for your help.

2 answers:

Answer 0 (score: 3)

You have to replace the column with a new schema. ArrayType takes two parameters: elementType and containsNull.

from pyspark.sql.types import *
from pyspark.sql.functions import udf

x = [("a", ["b", "c", "d", "e"]), ("g", ["h", "h", "d", "e"])]
schema = StructType([StructField("key", StringType(), nullable=True),
                     StructField("values", ArrayType(StringType(), containsNull=False))])

df = spark.createDataFrame(x, schema=schema)
df.printSchema()

# Build the target type, then pass the data through an identity UDF;
# the UDF's declared return type stamps the column with the new schema
new_schema = ArrayType(StringType(), containsNull=True)
udf_foo = udf(lambda x: x, new_schema)
df.withColumn("values", udf_foo("values")).printSchema()



root
 |-- key: string (nullable = true)
 |-- values: array (nullable = true)
 |    |-- element: string (containsNull = false)

root
 |-- key: string (nullable = true)
 |-- values: array (nullable = true)
 |    |-- element: string (containsNull = true)
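
As a side note, a plain cast may achieve the same thing without the UDF round-trip. This is a minimal sketch, assuming your Spark version supports casting between array types (an identity cast on the string elements):

from pyspark.sql.types import ArrayType, StringType

# Cast the existing array<string> column to the same element type,
# but with containsNull=True
df.withColumn("values",
              df["values"].cast(ArrayType(StringType(), containsNull=True))
              ).printSchema()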

Answer 1 (score: 0)

Here is a useful example in which you change the schema of every column, assuming you want them all to have the same type:

from pyspark.sql import Row
from pyspark.sql.functions import col

df = sc.parallelize([
    Row(isbn=1, count=1, average=10.6666666),
    Row(isbn=2, count=1, average=11.1111111)
]).toDF()

df.printSchema()

# Cast every column to float; note that printSchema() returns None,
# so keep the select and the schema print as separate steps
df = df.select(*[col(x).cast('float') for x in df.columns])
df.printSchema()

Output:

root
 |-- average: double (nullable = true)
 |-- count: long (nullable = true)
 |-- isbn: long (nullable = true)

root
 |-- average: float (nullable = true)
 |-- count: float (nullable = true)
 |-- isbn: float (nullable = true)
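
If different columns need different types, the same select/cast pattern generalizes to a per-column mapping. A sketch with a hypothetical type_map dict (the column names and target types below are illustrative):

from pyspark.sql.functions import col

# Hypothetical mapping from column name to target type
type_map = {"isbn": "int", "count": "int", "average": "float"}
df = df.select(*[col(c).cast(t) for c, t in type_map.items()])
df.printSchema()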