Search the remaining columns of a PySpark dataframe for the values in column1

Time: 2019-03-06 19:49:42

Tags: python search pyspark

Suppose a PySpark dataframe of the following form:

id  col1  col2 col3 col4
------------------------
as1  4    10    4    6
as2  6    3     6    1
as3  6    0     2    1
as4  8    8     6    1
as5  9    6     6    9

Is it possible to search col2-col4 of the PySpark dataframe for the values in col1 and return the (row id, column name) pairs? For example:

In col1, 4 is found in (as1, col3)
In col1, 6 is found in (as2, col3), (as1, col4), (as4, col3), (as5, col3)
In col1, 8 is found in (as4,col2)
In col1, 9 is found in (as5,col4)

Hint: assume the values in col1 form a set, {4, 6, 8, 9}, i.e. they are unique.

2 answers:

Answer 0 (score: 1)

Yes, you can leverage the Spark SQL .isin operator.

Let's first create the DataFrame from your example.

Part 1 - Creating the DataFrame

from pyspark.sql.types import StructType, StructField, IntegerType

cSchema = StructType([StructField("id", IntegerType()),
                      StructField("col1", IntegerType()),
                      StructField("col2", IntegerType()),
                      StructField("col3", IntegerType()),
                      StructField("col4", IntegerType())])


test_data = [[1,4,10,4,6],[2,6,3,6,1],[3,6,0,2,1],[4,8,8,6,1],[5,9,6,6,9]]


df = spark.createDataFrame(test_data,schema=cSchema)

df.show()

+---+----+----+----+----+
| id|col1|col2|col3|col4|
+---+----+----+----+----+
|  1|   4|  10|   4|   6|
|  2|   6|   3|   6|   1|
|  3|   6|   0|   2|   1|
|  4|   8|   8|   6|   1|
|  5|   9|   6|   6|   9|
+---+----+----+----+----+

Part 2 - A function to search for matching values

isin: a boolean expression that evaluates to true if the value of the expression is contained in the evaluated values of the arguments. http://spark.apache.org/docs/2.1.0/api/python/pyspark.sql.html
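As a quick standalone illustration of .isin (a sketch reusing the df created above; the literal values 4 and 6 are arbitrary):

# Keep only the rows whose col3 value appears in a literal list.
df.filter(df["col3"].isin([4, 6])).show()

The search function below does the same thing, except that the value list is collected from col1 instead of being hard-coded.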

def search(value_col, search_col):
    # Collect the values of the first column into a Python list ...
    col1_list = df.select(value_col).rdd \
        .map(lambda x: x[0]).collect()
    # ... then keep the rows whose search column value is in that list.
    search_results = df[df[search_col].isin(col1_list)]
    return search_results

search_results = search("col1", "col3")
search_results.show()

+---+----+----+----+----+
| id|col1|col2|col3|col4|
+---+----+----+----+----+
|  1|   4|  10|   4|   6|
|  2|   6|   3|   6|   1|
|  4|   8|   8|   6|   1|
|  5|   9|   6|   6|   9|
+---+----+----+----+----+

This should steer you in the right direction. You could select just the id column, or whatever you are trying to return, and the function can easily be changed to take more columns to search. Hope this helps!
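If you want the exact (value, id, column) triples the question asks for rather than whole matching rows, here is a minimal driver-side sketch building on the same df (the helper name find_matches and its defaults are my own, not part of the answer above):

def find_matches(df, value_col="col1", search_cols=("col2", "col3", "col4")):
    # Collect the (small, unique) set of values to look for.
    values = {row[0] for row in df.select(value_col).collect()}
    triples = []
    # Collecting the whole frame is fine for a toy example,
    # but it pulls every row onto the driver.
    for row in df.collect():
        for c in search_cols:
            if row[c] in values:
                triples.append((row[c], row["id"], c))
    return triples

for value, row_id, col in find_matches(df):
    print(value, "found in", (row_id, col))

For large frames a distributed unpivot-and-join is preferable; a sketch of that approach follows the second answer.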

Answer 1 (score: 0)

# create the schema as a list of StructFields
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

cSchema = StructType([StructField("id", StringType()),
                      StructField("col1", IntegerType()),
                      StructField("col2", IntegerType()),
                      StructField("col3", IntegerType()),
                      StructField("col4", IntegerType())])

test_data = [['as1', 4, 10, 4, 6],
             ['as2', 6, 3, 6, 1],
             ['as3', 6, 0, 2, 1],
             ['as4', 8, 8, 6, 1],
             ['as5', 9, 6, 6, 9]]

# create pyspark dataframe
df = spark.createDataFrame(test_data, schema=cSchema)

df.show()

# obtain the distinct values of col1
distinct_list = [i.col1 for i in df.select("col1").distinct().collect()]
# columns to display in the results: id plus the remaining value columns
col_list = ['id', 'col2', 'col3', 'col4']

# search the remaining columns for the values found in col1
def search(distinct_list):
    for i in distinct_list:
        print(str(i) + ' found in: ')

        # scan only the value columns; 'id' is kept for display via col_list
        for col in ['col2', 'col3', 'col4']:
            df_search = df.select(*col_list) \
                .filter(df[col] == i)

            if len(df_search.head(1)) > 0:
                df_search.show()


search(distinct_list)

The full sample code can be found on GitHub.

Output:

+---+----+----+----+----+
| id|col1|col2|col3|col4|
+---+----+----+----+----+
|as1|   4|  10|   4|   6|
|as2|   6|   3|   6|   1|
|as3|   6|   0|   2|   1|
|as4|   8|   8|   6|   1|
|as5|   9|   6|   6|   9|
+---+----+----+----+----+

6 found in: 
+---+----+----+----+
| id|col2|col3|col4|
+---+----+----+----+
|as5|   6|   6|   9|
+---+----+----+----+

+---+----+----+----+
| id|col2|col3|col4|
+---+----+----+----+
|as2|   3|   6|   1|
|as4|   8|   6|   1|
|as5|   6|   6|   9|
+---+----+----+----+

+---+----+----+----+
| id|col2|col3|col4|
+---+----+----+----+
|as1|  10|   4|   6|
+---+----+----+----+

9 found in: 
+---+----+----+----+
| id|col2|col3|col4|
+---+----+----+----+
|as5|   6|   6|   9|
+---+----+----+----+

4 found in: 
+---+----+----+----+
| id|col2|col3|col4|
+---+----+----+----+
|as1|  10|   4|   6|
+---+----+----+----+

8 found in: 
+---+----+----+----+
| id|col2|col3|col4|
+---+----+----+----+
|as4|   8|   6|   1|
+---+----+----+----+
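Neither answer returns the (id, column name) pairs directly; both show whole matching rows instead. As a closing sketch (assuming the string-id df from this answer; stack is a standard Spark SQL generator, while the output names col_name and value are my own choices), the value columns can be unpivoted and joined against col1 to produce exactly the pairs the question asks for:

from pyspark.sql import functions as F

# Unpivot col2-col4 into (id, col_name, value) rows.
long_df = df.selectExpr(
    "id",
    "stack(3, 'col2', col2, 'col3', col3, 'col4', col4) as (col_name, value)",
)

# Keep only the values that occur in col1, then list where each was found.
col1_df = df.select(F.col("col1").alias("value")).distinct()
result = long_df.join(col1_df, on="value").orderBy("value", "id", "col_name")
result.show()

Because the join runs on the executors, this variant never collects data to the driver.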