This question is basically a duplicate of this question, except that I am working in R. The pyspark solution looks solid, but I haven't been able to figure out how to apply collect_list over a window in the same way in sparklyr.
I have a Spark DataFrame with the following structure:
------------------------------
userid |    date    | city
------------------------------
   1   | 2018-08-02 |  A
   1   | 2018-08-03 |  B
   1   | 2018-08-04 |  C
   2   | 2018-08-17 |  G
   2   | 2018-08-20 |  E
   2   | 2018-08-23 |  F
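In case it helps with reproducing, here is a minimal sketch of the setup I am assuming: a local Spark connection, with the example data uploaded under the registered name my_sdf that the attempts below query.

library(sparklyr)
library(dplyr)

sc <- spark_connect(master = "local")

# Local copy of the example data, uploaded to Spark as `my_sdf`
local_df <- tibble::tribble(
  ~userid, ~date,        ~city,
  1,       "2018-08-02", "A",
  1,       "2018-08-03", "B",
  1,       "2018-08-04", "C",
  2,       "2018-08-17", "G",
  2,       "2018-08-20", "E",
  2,       "2018-08-23", "F"
)
my_sdf <- sdf_copy_to(sc, local_df, "my_sdf", overwrite = TRUE)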
I am trying to group the DataFrame by userid, sort each group by date, and then collapse the city column into a concatenation of its values. Desired output:
------------------
userid | cities
------------------
   1   | A, B, C
   2   | G, E, F
The problem is that every method I have tried produces a cities column in the wrong order for some users (roughly 3% when testing on 5,000 users).
Attempt 1: using dplyr and collect_list.
my_sdf %>%
  dplyr::group_by(userid) %>%
  dplyr::arrange(date) %>%
  dplyr::summarise(cities = paste(collect_list(city), sep = ", "))
Attempt 2: using replyr::gapply, since the operation fits the description of a "Grouped-Order-Apply".
get_cities <- . %>%
  summarise(cities = paste(collect_list(city), sep = ", "))

my_sdf %>%
  replyr::gapply(gcolumn = "userid",
                 f = get_cities,
                 ocolumn = "date",
                 partitionMethod = "group_by")
Attempt 3: written as a SQL window function.
spark_session(sc) %>%
  sparklyr::invoke("sql",
                   "SELECT userid, CONCAT_WS(', ', collect_list(city)) AS cities
                   OVER (PARTITION BY userid
                   ORDER BY date)
                   FROM my_sdf") %>%
  sparklyr::sdf_register() %>%
  sparklyr::sdf_copy_to(sc, ., "my_sdf", overwrite = T)
^ This throws the following error:
Error: org.apache.spark.sql.catalyst.parser.ParseException:
mismatched input 'OVER' expecting <EOF>(line 2, pos 19)
== SQL ==
SELECT userid, conversion_location, CONCAT_WS(' > ', collect_list(channel)) AS path
OVER (PARTITION BY userid, conversion_location
-------------------^^^
ORDER BY occurred_at)
FROM paths_model
Answer 0 (score: 0)
Solved! I had misunderstood how collect_list() and Spark SQL work together. I didn't realize a list could be returned; I thought the concatenation had to happen inside the query. The following produces the desired result:
spark_output <- spark_session(sc) %>%
  sparklyr::invoke("sql",
                   "SELECT userid, collect_list(city)
                   OVER (PARTITION BY userid
                   ORDER BY date
                   ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
                   AS cities
                   FROM my_sdf") %>%
  sdf_register() %>%
  # the window emits the full array on every row, so keep one row per user
  group_by(userid) %>%
  filter(row_number(userid) == 1) %>%
  ungroup() %>%
  # collapse the collected array into a single delimited string
  mutate(cities = paste(cities, sep = " > ")) %>%
  sdf_register()
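Side note: if your sparklyr version includes sdf_sql(), the invoke("sql", ...) / sdf_register() pair above can, I believe, be collapsed into a single call. A minimal sketch, assuming the same sc connection and my_sdf table:

# Assumes sparklyr::sdf_sql() is available (present in newer sparklyr releases)
spark_output <- sparklyr::sdf_sql(sc, "
  SELECT userid, collect_list(city)
    OVER (PARTITION BY userid
          ORDER BY date
          ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS cities
  FROM my_sdf") %>%
  dplyr::group_by(userid) %>%
  dplyr::filter(row_number(userid) == 1) %>%  # one row per user, as above
  dplyr::ungroup()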
Answer 1 (score: -1)
OK, so I admit the following solution is not efficient at all (it uses a for loop and is a lot of code for something that looks like it should be simple), but I think it should work:
#install.packages("tidyverse") # if needed
library(tidyverse)
df <- tribble(
~userid, ~date, ~city,
1 , "2018-08-02" , "A",
1 , "2018-08-03" , "B",
1 , "2018-08-04" , "C",
2 , "2018-08-17" , "G",
2 , "2018-08-20" , "E",
2 , "2018-08-23" , "F"
)
# one column per date, one row per user; the date columns sort chronologically here
cityPerId <- df %>%
  spread(key = date, value = city)

toMutate <- NA
for (i in 1:nrow(cityPerId)) {
  # pull the i-th user's cities in date-column order, dropping missing dates
  cities <- cityPerId[i, ][2:ncol(cityPerId)] %>%
    t() %>%
    as.vector() %>%
    na.omit()
  collapsedCities <- paste(cities, collapse = ",")
  toMutate <- c(toMutate, collapsedCities)
}
toMutate <- toMutate[2:length(toMutate)]  # drop the initial NA placeholder

final <- cityPerId %>%
  mutate(cities = toMutate) %>%
  select(userid, cities)
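For what it's worth, if the data is being pulled into local R anyway, the loop can (I believe) be replaced by a grouped summarise over the same df, since local dplyr preserves row order through summarise:

final <- df %>%
  group_by(userid) %>%
  arrange(date, .by_group = TRUE) %>%  # sort within each user's group
  summarise(cities = paste(city, collapse = ","))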