How to store a numpy.ndarray in a DataFrame column

Asked: 2018-08-01 18:30:15

Tags: python numpy apache-spark pyspark spark-structured-streaming

In Structured Streaming, how can I use a UDF to create two new columns from a two-element numpy.ndarray?

Here is what I have so far:

from pyspark.sql.types import StructType, StructField, LongType, ArrayType

schema = StructType([
    StructField("host_id", LongType()),
    StructField("fence_id", LongType()),
    StructField("policy_id", LongType()),
    StructField("timestamp", LongType()),
    StructField("distances", ArrayType(LongType()))
])

ds = spark \
    .readStream \
    .format("json") \
    .schema(schema) \
    .load("data/")

ds.printSchema()
from pyspark.sql.functions import udf, col

pa = PosAlgorithm()
get_distance_udf = udf(lambda y: pa.getLocation(y), ArrayType(LongType()))
dfnew = ds.withColumn("location", get_distance_udf(col("distances")))

query = dfnew \
    .writeStream \
    .format('console') \
    .start()

query.awaitTermination()

The function pa.getLocation returns a numpy.ndarray, for example [42.15999863, 2.08498164]. I want to store these numbers in two new columns, latitude and longitude, of the DataFrame dfnew.
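For context, since PosAlgorithm is not shown in the question, here is a hypothetical stand-in that produces output of the shape described (a two-element ndarray):

```python
import numpy as np

class PosAlgorithm:
    """Hypothetical stand-in -- the real PosAlgorithm is not shown in the question."""
    def getLocation(self, distances):
        # Pretend computation: return a fixed [latitude, longitude] pair
        return np.asarray([42.15999863, 2.08498164])

pa = PosAlgorithm()
loc = pa.getLocation([1, 2, 3])
print(type(loc).__name__, loc.shape)  # ndarray (2,)
```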

1 Answer:

Answer 0 (score: 3)

Replace

get_distance_udf = udf(lambda y: pa.getLocation(y), ArrayType(LongType()))

with

from pyspark.sql.types import StructType, StructField, DoubleType

get_distance_udf = udf(
    lambda y: pa.getLocation(y).tolist(),
    StructType([
        StructField("latitude", DoubleType()),
        StructField("longitude", DoubleType())
    ])
)

then expand the result as needed:

from pyspark.sql.functions import col

(ds
    .withColumn("location", get_distance_udf(col("distances")))
    .withColumn("latitude", col("location.latitude"))
    .withColumn("longitude", col("location.longitude")))
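The .tolist() call is what makes this work: Spark cannot serialize numpy scalar types such as numpy.float64, so the array must be converted to plain Python floats before the UDF returns. A minimal sketch of the Python side of that UDF, using a hypothetical getLocation stub (the real PosAlgorithm is not shown in the question):

```python
import numpy as np

class PosAlgorithm:
    # Hypothetical stub -- the real implementation is not shown in the question.
    def getLocation(self, distances):
        return np.asarray([42.15999863, 2.08498164])

pa = PosAlgorithm()

# The function Spark calls for each row, as wrapped in the udf above:
get_location = lambda y: pa.getLocation(y).tolist()

result = get_location([10, 20, 30])
print(result)           # [42.15999863, 2.08498164]
print(type(result[0]))  # <class 'float'> -- plain Python float, not numpy.float64
```

Once location is a struct column, the two fields can also be pulled out in a single step with dfnew.select("*", "location.*").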