I have the following Spark DataFrame:
agent_product_sale=data.frame(agent=c('a','b','c','d','d','c','a','b','c','a','b'),
product=c('P1','P1','P3','P4','P1','P1','P2','P2','P2','P4','P3'),
sale_amount=c(1000,1000,3000,4000,1000,1000,2000,2000,2000,4000,3000))
RDD_aps=createDataFrame(sqlContext,agent_product_sale)
agent product sale_amount
1 a P1 1000
2 b P1 1000
3 c P3 3000
4 d P4 4000
5 d P1 1000
6 c P1 1000
7 a P2 2000
8 b P2 2000
9 c P2 2000
10 a P4 4000
11 b P3 3000
I need to group the Spark DataFrame by agent and find, for each agent, the product with the highest sale amount:
agent most_expensive
a P4
b P3
c P3
d P4
I used the following code, but it returns the maximum sale_amount for each agent instead of the corresponding product:
schema <- structType(structField("agent", "string"),
                     structField("max_sale_amount", "double"))
result <- gapply(
  RDD_aps,
  c("agent"),
  function(key, x) {
    y <- data.frame(key, max(x$sale_amount), stringsAsFactors = FALSE)
  }, schema)
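The function above only ever computes max(x$sale_amount), so the product column never reaches the output. The missing step is to select, inside each group, the row holding that maximum, e.g. with which.max. A minimal sketch of that per-group selection in plain base R (no Spark needed), using the question's data:

```r
# Question's data as a local data.frame
aps <- data.frame(agent=c('a','b','c','d','d','c','a','b','c','a','b'),
                  product=c('P1','P1','P3','P4','P1','P1','P2','P2','P2','P4','P3'),
                  sale_amount=c(1000,1000,3000,4000,1000,1000,2000,2000,2000,4000,3000),
                  stringsAsFactors = FALSE)

# For one group, take the product on the row with the largest sale_amount
top_product <- function(x) x$product[which.max(x$sale_amount)]

# split() yields one data.frame per agent; sapply() applies the selection to each
sapply(split(aps, aps$agent), top_product)
#    a    b    c    d
# "P4" "P3" "P3" "P4"
```

The same x$product[which.max(x$sale_amount)] selection can be dropped into the gapply() function body, with a string product field in the schema instead of max_sale_amount.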
Answer 0 (score: 1):
ar1 <- arrange(RDD_aps, desc(RDD_aps$sale_amount))
collect(summarize(groupBy(ar1, ar1$agent), most_expensive = first(ar1$product)))
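The idea here is: sort descending by sale_amount, then take the first row seen per agent. A base-R sketch of the same logic on a local data.frame may help verify it (note that relying on first() after arrange() leans on an implementation detail of Spark's grouping rather than a documented guarantee):

```r
# Question's data as a local data.frame
aps <- data.frame(agent=c('a','b','c','d','d','c','a','b','c','a','b'),
                  product=c('P1','P1','P3','P4','P1','P1','P2','P2','P2','P4','P3'),
                  sale_amount=c(1000,1000,3000,4000,1000,1000,2000,2000,2000,4000,3000),
                  stringsAsFactors = FALSE)

# Sort descending by sale_amount, then keep the first occurrence of each agent
sorted <- aps[order(-aps$sale_amount), ]
sorted[!duplicated(sorted$agent), c("agent", "product")]
```

Each agent's first row after the sort carries its highest-priced product (a: P4, b: P3, c: P3, d: P4).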
Answer 1 (score: 0):
You can use tapply() or aggregate() to find the maximum within each group:
agent_product_sale=data.frame(agent=c('a','b','c','d','d','c','a','b','c','a','b'),
product=c('P1','P1','P3','P4','P1','P1','P2','P2','P2','P4','P3'),
sale_amount=c(1000,1000,3000,4000,1000,1000,2000,2000,2000,4000,3000))
tapply(agent_product_sale$sale_amount,agent_product_sale$agent, max)
   a    b    c    d
4000 3000 3000 4000
aggregate(agent_product_sale$sale_amount,by=list(agent_product_sale$agent), max)
  Group.1    x
1       a 4000
2       b 3000
3       c 3000
4       d 4000
aggregate() returns a data.frame while tapply() returns an array; which one is more convenient depends on how you want to work with the result afterwards.
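Since tapply() and aggregate() only return the maximal amount, not the product the question asks for, one way to recover the product in base R is to merge the per-agent maxima back onto the original rows. A sketch using the question's data (the merge keys are the shared columns agent and sale_amount, so a tied maximum within an agent would yield more than one row):

```r
# Question's data as a local data.frame
aps <- data.frame(agent=c('a','b','c','d','d','c','a','b','c','a','b'),
                  product=c('P1','P1','P3','P4','P1','P1','P2','P2','P2','P4','P3'),
                  sale_amount=c(1000,1000,3000,4000,1000,1000,2000,2000,2000,4000,3000),
                  stringsAsFactors = FALSE)

# Per-agent maximum sale_amount
maxima <- aggregate(sale_amount ~ agent, data = aps, FUN = max)

# Join back on agent + sale_amount to pick the rows holding the maxima
merge(maxima, aps)[, c("agent", "product")]
#   agent product
# 1     a      P4
# 2     b      P3
# 3     c      P3
# 4     d      P4
```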