How to filter keys in a MapType column in PySpark?

Asked: 2017-06-22 17:44:50

Tags: apache-spark pyspark apache-spark-sql

Given a DataFrame, is it possible in PySpark to filter out some of the keys of a column collection of type MapType(StringType, StringType, True) while keeping the schema intact?

root
 |-- id: string (nullable = true)
 |-- collection: map (nullable = true)
 |    |-- key: string
 |    |-- value: string
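
For reference, a minimal DataFrame with this schema can be built as follows; the sample row and session name are hypothetical and only meant to make the examples below concrete:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, MapType

spark = SparkSession.builder.appName("map-filter-example").getOrCreate()

schema = StructType([
    StructField("id", StringType(), True),
    StructField("collection", MapType(StringType(), StringType(), True), True),
])

# Hypothetical sample row with keys k1, k2, k3
df = spark.createDataFrame(
    [("a", {"k1": "v1", "k2": "v2", "k3": "v3"})],
    schema,
)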

3 Answers:

Answer 0 (score: 5)

Yes, it is possible. You should create a udf responsible for filtering the keys out of the map, and use it with a withColumn transformation to filter the keys from the collection field.

Example implementation in Scala below:

import org.apache.spark.sql.functions.{array, lit, udf}

// Start from a plain Scala method responsible for filtering keys from a Map
def filterKeys(collection: Map[String, String], keys: Seq[String]): Map[String, String] =
    collection.filter { case (k, _) => !keys.contains(k) }

// Create a Spark UDF based on the function above
// (an array column is passed to a Scala UDF as a Seq, so the parameter must be typed as Seq)
val filterKeysUdf = udf((collection: Map[String, String], keys: Seq[String]) => filterKeys(collection, keys))

// Use the UDF to drop the listed keys from the collection column
val newDf = df.withColumn("collection", filterKeysUdf(df("collection"), array(lit("k1"))))

The implementation in Python:

from pyspark.sql.functions import array, lit, udf
from pyspark.sql.types import MapType, StringType

# Start from a plain Python function responsible for filtering keys from a dict
def filterKeys(collection, keys):
    return {k: collection[k] for k in collection if k not in keys}

# Create a Spark UDF based on the function above, declaring the map return type
filterKeysUdf = udf(filterKeys, MapType(StringType(), StringType()))

# Create an array column literal from a Python list of keys to drop
keywords_lit = array(*[lit(k) for k in ["k1", "k2"]])

# Use the UDF to drop the listed keys from the collection column
newDf = df.withColumn("collection", filterKeysUdf(df.collection, keywords_lit))
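
Applied to the hypothetical sample DataFrame sketched under the question, the schema stays intact and only the listed keys are dropped (the map rendering in show() varies slightly across Spark versions):

newDf.show(truncate=False)
# +---+----------+
# |id |collection|
# +---+----------+
# |a  |{k3 -> v3}|
# +---+----------+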

Answer 1 (score: 0)

I just want to add to what Piotr Kalański said, in case you want to filter out null or empty values.

import pyspark.sql.functions as F
from pyspark.sql.types import MapType, StringType

# Keep only the entries whose value is truthy (drops None and empty strings)
def filterValue(collection):
    return {k: collection[k] for k in collection if collection[k]}

filterValuesUdf = F.udf(filterValue, MapType(StringType(), StringType()))

# "f" is the map column of the source DataFrame in this example
newDf = source_map_df.withColumn("collection", filterValuesUdf(source_map_df.f))
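
For illustration, the plain-Python behaviour of the filter on a hypothetical map value:

print(filterValue({"k1": "v1", "k2": None, "k3": ""}))
# {'k1': 'v1'}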

Answer 2 (score: 0)

Since version 3.1, you can do this with map_filter:

import pyspark.sql.functions as f

# The lambda receives Column expressions, so test for null with isNotNull()
# rather than Python's "is not None" (which is always true for a Column object)
df.withColumn("filtered_map", f.map_filter("map_col", lambda _, v: v.isNotNull()))