How to get value_counts for a Spark row?

Date: 2019-10-07 20:08:58

Tags: dataframe apache-spark pyspark

I have a Spark dataframe in which 3 columns store 3 different predictions. I want to know the count of each output value so that I can pick the value that occurs the most times as the final output.

I can do this easily in pandas by calling my lambda function on each row to get value_counts, as shown below. Here I have converted the Spark df to a pandas df, but I need to be able to perform a similar operation directly on the Spark df.

from pyspark.sql import Row

# Build a one-row Spark dataframe, then convert it to pandas to use value_counts
r = [Row(run_1=1, run_2=2, run_3=1, name='test run', id=1)]
df1 = spark.createDataFrame(r)
df1.show()

df2 = df1.toPandas()
r = df2.iloc[0]
val_counts = r[['run_1', 'run_2', 'run_3']].value_counts()
print(val_counts)

top_val = val_counts.index[0]
top_val_cnt = val_counts.values[0]
print('Majority output = %s, occurred %s out of 3 times' % (top_val, top_val_cnt))

The output tells me that the value 1 occurred the most times, in this case twice:

+---+--------+-----+-----+-----+
| id|    name|run_1|run_2|run_3|
+---+--------+-----+-----+-----+
|  1|test run|    1|    2|    1|
+---+--------+-----+-----+-----+

1    2
2    1
Name: 0, dtype: int64

Majority output = 1, occurred 2 out of 3 times

I am trying to write a udf function that can take each row of df1 and get top_val and top_val_cnt. Is there a way to achieve this using the Spark df?

2 Answers:

Answer 0 (score: 1)

This is Scala, but the code for Python should be similar; perhaps it will help you.

// Requires import spark.implicits._ when run outside spark-shell
val df1 = Seq((1, 1, 1, 2), (1, 2, 3, 3), (2, 2, 2, 2)).toDF()
df1.show()

// Pack all columns of each row into an array, then count occurrences per row
df1.select(array('*)).map(s => {
  val list = s.getList(0)
  (list.toString(), list.toArray.groupBy(i => i).mapValues(_.size).toList.toString())
}).show(false)

Output:

+---+---+---+---+
| _1| _2| _3| _4|
+---+---+---+---+
|  1|  1|  1|  2|
|  1|  2|  3|  3|
|  2|  2|  2|  2|
+---+---+---+---+

+------------+-------------------------+
|_1          |_2                       |
+------------+-------------------------+
|[1, 1, 1, 2]|List((2,1), (1,3))       |
|[1, 2, 3, 3]|List((2,1), (1,1), (3,2))|
|[2, 2, 2, 2]|List((2,4))              |
+------------+-------------------------+
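
Since the question is tagged pyspark, here is a rough sketch of the same per-row counting idea in Python, using a Python UDF built on collections.Counter. This is my own variation, not part of the original answer, and it assumes the question's df1 with integer run_1/run_2/run_3 columns:

from collections import Counter
from pyspark.sql import functions as F
from pyspark.sql.types import MapType, IntegerType

# Build a value -> count map for the three run columns of each row
value_counts_udf = F.udf(lambda *vals: dict(Counter(vals)),
                         MapType(IntegerType(), IntegerType()))

df1.withColumn('value_counts',
               value_counts_udf('run_1', 'run_2', 'run_3')).show(truncate=False)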

Answer 1 (score: 1)

Let's create a test dataframe similar to yours.

list = [(1, 'test run', 1, 2, 1), (2, 'test run', 3, 2, 3), (3, 'test run', 4, 4, 4)]
df = spark.createDataFrame(list, ['id', 'name', 'run_1', 'run_2', 'run_3'])

# For each row, take the most common value among the three run columns
newdf = df.rdd.map(lambda x: (x[0], x[1], x[2:])) \
    .map(lambda x: (x[0], x[1], x[2][0], x[2][1], x[2][2], [max(set(x[2]), key=x[2].count)])) \
    .toDF(['id', 'test', 'run_1', 'run_2', 'run_3', 'most_frequent'])


>>> newdf.show()
+---+--------+-----+-----+-----+-------------+
| id|    test|run_1|run_2|run_3|most_frequent|
+---+--------+-----+-----+-----+-------------+
|  1|test run|    1|    2|    1|          [1]|
|  2|test run|    3|    2|    3|          [3]|
|  3|test run|    4|    4|    4|          [4]|
+---+--------+-----+-----+-----+-------------+

Or, if you need to handle the case where every item in the list is different, i.e. return null:

list = [(1, 'test run', 1, 2, 1), (2, 'test run', 3, 2, 3), (3, 'test run', 4, 4, 4), (4, 'test run', 1, 2, 3)]
df = spark.createDataFrame(list, ['id', 'name', 'run_1', 'run_2', 'run_3'])

from pyspark.sql.functions import udf

@udf
def most_frequent(*mylist):
    counter = 1
    num = mylist[0]

    # Return the first value whose count exceeds the initial counter of 1;
    # if no value repeats, the for/else falls through and returns None.
    for i in mylist:
        curr_frequency = mylist.count(i)
        if curr_frequency > counter:
            counter = curr_frequency
            num = i
            return num
    else:
        return None

The counter is initialized to '1', so a value is returned only if its count is greater than '1'.

df.withColumn('most_frequent', most_frequent('run_1', 'run_2', 'run_3')).show()

+---+--------+-----+-----+-----+-------------+
| id|    name|run_1|run_2|run_3|most_frequent|
+---+--------+-----+-----+-----+-------------+
|  1|test run|    1|    2|    1|            1|
|  2|test run|    3|    2|    3|            3|
|  3|test run|    4|    4|    4|            4|
|  4|test run|    1|    2|    3|         null|
+---+--------+-----+-----+-----+-------------+
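
As a side note, for exactly three columns the Python UDF can also be avoided with built-in column expressions. This is a minimal sketch of my own, not part of the original answer; like the UDF above it returns null on a three-way tie:

from pyspark.sql import functions as F

# Majority of three columns: a value wins if it appears in at least two of them
majority = (F.when((F.col('run_1') == F.col('run_2')) |
                   (F.col('run_1') == F.col('run_3')), F.col('run_1'))
             .when(F.col('run_2') == F.col('run_3'), F.col('run_2'))
             .otherwise(F.lit(None)))

df.withColumn('most_frequent', majority).show()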