In PySpark, how to, for a data type of…

Time: 2019-07-17 18:07:35

Tags: apache-spark pyspark apache-spark-sql apache-spark-mllib

I want col4 and col5 to come out as ArrayType, but they come out as StringType. This is in PySpark. I would like to know how to do this.

col4 -- array (nullable = true)
      |-- element: integer (containsNull = true)
col5 -- array (nullable = true)
      |-- element: string (containsNull = true)

+---+-----------+
| id|      value|
+---+-----------+
|  1| [foo, foo]|
|  2|[bar, tooo]|
+---+-----------+

+---+-----------+---------------------+
|id |value      |TF_CUS(value)        |
+---+-----------+---------------------+
|1  |[foo, foo] |[[foo], [2]]         |
|2  |[bar, tooo]|[[bar, tooo], [1, 1]]|
+---+-----------+---------------------+

+---+-----------+---------------------+------+-----------+
|id |value      |TF_CUS               |col4  |col5       |
+---+-----------+---------------------+------+-----------+
|1  |[foo, foo] |[[foo], [2]]         |[2]   |[foo]      |
|2  |[bar, tooo]|[[bar, tooo], [1, 1]]|[1, 1]|[bar, tooo]|
+---+-----------+---------------------+------+-----------+

Looking forward to seeing a solution. The current schema is:

root
 |-- id: long (nullable = true)
 |-- value: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- TF_CUS: array (nullable = true)
 |    |-- element: string (containsNull = true)
 |-- col4: string (nullable = true)
 |-- col5: string (nullable = true)

from collections import Counter

from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StringType

def TF_CUS(lista):
    # Unique terms and their frequencies, e.g. ["foo", "foo"] -> (["foo"], [2])
    counts = Counter(lista)
    return (list(counts.keys()), list(counts.values()))

# Note: the declared return type (array<string>) does not match the tuple of
# two lists the function actually returns, so the extracted items end up as
# strings rather than arrays.
TF_CUS_cols = udf(TF_CUS, ArrayType(StringType()))

df = sc.parallelize([(1, ["foo", "foo"]), (2, ["bar", "tooo"])]).toDF(["id", "value"])
df.show()
df.select("*", TF_CUS_cols(df["value"])).show(2, False)
df = df.select("*", TF_CUS_cols(df["value"]).alias("TF_CUS"))
df.withColumn("col4", df["TF_CUS"].getItem(1)).withColumn("col5", df["TF_CUS"].getItem(0)).show(2, False)
df = df.withColumn("col4", df["TF_CUS"].getItem(1)).withColumn("col5", df["TF_CUS"].getItem(0))

1 Answer:

Answer 0 (score: 0)

In the case of col4, you will have to take the column and do a simple cast to type array<int>.

import pyspark.sql.functions as F

df = df.withColumn("col6", F.col("col4").cast("array<int>"))
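As a side note, the string-typed columns can also be avoided at the source: the UDF returns a tuple of two lists, but is registered as ArrayType(StringType()), so Spark stringifies each inner list. A hedged sketch (not the answerer's method) is to declare a StructType return so each field keeps its own array type; the field names `words` and `counts` and the local SparkSession setup are illustrative assumptions, not from the question:

```python
from collections import Counter

def TF_CUS(lista):
    # Unique terms and their frequencies, e.g. ["foo", "foo"] -> (["foo"], [2])
    counts = Counter(lista)
    return (list(counts.keys()), list(counts.values()))

if __name__ == "__main__":
    # PySpark parts are kept under the main guard; they need a running Spark.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf
    from pyspark.sql.types import (ArrayType, IntegerType, StringType,
                                   StructField, StructType)

    # Declare the true shape of the result: a struct of two typed arrays.
    # "words"/"counts" are made-up field names for illustration.
    tf_schema = StructType([
        StructField("words", ArrayType(StringType())),
        StructField("counts", ArrayType(IntegerType())),
    ])
    TF_CUS_cols = udf(TF_CUS, tf_schema)

    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame([(1, ["foo", "foo"]), (2, ["bar", "tooo"])],
                               ["id", "value"])
    df = df.withColumn("tf", TF_CUS_cols("value"))
    # col5 and col4 now come out as array<string> and array<int> directly,
    # with no cast from string needed.
    df = (df.withColumn("col5", df["tf"]["words"])
            .withColumn("col4", df["tf"]["counts"]))
    df.printSchema()
```

With this declared schema the struct fields can be pulled out by name, so `getItem(0)`/`getItem(1)` and the later `array<int>` cast are no longer necessary.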