Error handling for a split-apply-combine strategy in sparklyr

Date: 2019-06-03 17:40:46

Tags: r purrr sparklyr

I have a Spark DataFrame with an ID column named "userid" that I am manipulating with sparklyr. Each userid can have anywhere from one row to hundreds of rows of data. I am applying a function to each userid group that condenses the rows it contains based on certain event conditions. Something like:

sdf %>%
  group_by(userid) %>%
  ... %>%   # using dplyr::filter and dplyr::mutate
  ungroup()

I want to wrap this function in an error handler, such as purrr::possibly, so that an error in a single group does not interrupt the whole computation.

So far I have had the most success with the replyr package. Specifically, replyr::gapply "partitions by the values in a grouping column, applies a generic transform to each group, and then binds the groups back together." There are two methods for partitioning the data: "group_by" and "extract". The authors only recommend using "extract" when the number of groups is 100 or fewer, but the "group_by" method does not work the way I expect:

library(sparklyr)
library(dplyr) 
library(replyr)   # replyr::gapply
library(purrr)    # purrr::possibly

sc <- spark_connect(master = "local")

# Create a test data frame to use gapply on.
test_spark <- tibble(
  userid = c(1, 1, 2, 2, 3, 3),
  occurred_at = seq(1, 6)
) %>%
  sdf_copy_to(sc, ., "test_spark")

# Create a data frame that purrr::possibly should return in case of error.
default_spark <- tibble(userid = -1, max = -1, min = -1) %>%
  sdf_copy_to(sc, ., "default_spark")

#####################################################
# Method 1: gapply with partitionMethod = "group_by".
#####################################################

# Create a function which may throw an error. The group column, userid, is not 
# included since gapply( , partitionMethod = "group_by") creates it.
# - A print statement is included to show that when gapply uses "group_by", the 
# function is only called once.

fun_for_groups <- function(sdf) {
  temp <- sample(c(1,2), 1)
  print(temp)
  if (temp == 2) {
    log("a")
  } else {
    sdf %>%
      summarise(max = max(occurred_at),
                min = min(occurred_at))
  }
}

# Wrap the risky function to try and handle errors gracefully.

safe_for_groups <- purrr::possibly(fun_for_groups, otherwise = default_spark)

# Apply the safe function to each userid using gapply and "group_by".
# - The result is either a) only the default_spark data frame.
#                        b) the result expected if no error occurs in fun_for_groups.
#   I would expect the answer to have a mixture of default_spark rows and correct rows.

replyr::gapply(
  test_spark, 
  gcolumn = "userid", 
  f = safe_for_groups, 
  partitionMethod = "group_by"
)

#####################################################
# Method 2: gapply with partitionMethod = "extract".
#####################################################

# Create a function which may throw an error. The group column, userid, is 
# included since gapply( , partitionMethod = "extract") doesn't create it.
# - Include a print statement to show that when gapply uses partitionMethod 
#   "extract", the function is called once for each userid.

fun_for_extract <- function(df) {
  temp <- sample(c(1,2), 1)
  print(temp)
  if (temp == 2) {
    log("a")
  } else {
    df %>%
      summarise(max = max(occurred_at), 
                min = min(occurred_at),
                userid = min(userid))
  }
}

safe_for_extract <- purrr::possibly(fun_for_extract, otherwise = default_spark)

# Apply that function to each userid using gapply and "split".
# - The result dataframe has a mixture of "otherwise" rows and correct rows.

replyr::gapply(
  test_spark, 
  gcolumn = "userid", 
  f = safe_for_extract, 
  partitionMethod = "extract"
)

How bad is it to use gapply when the grouping column has millions of values? And is there an alternative to the error-handling strategies described above?

1 answer:

Answer 0 (score: 0)

replyr::gapply() is just a thin wrapper on top of dplyr (in this case, sparklyr).

For the grouped mode, the result is only correct if no group produces an error, because the computation is issued all at once. It is the most efficient mode, but it cannot really support any kind of error handling.
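
To see why, here is a rough sketch (an illustration, not replyr's actual code) that reuses test_spark and default_spark from the question: with a lazy Spark tbl, the wrapped function only builds a single query covering every group, so purrr::possibly() can only succeed or fail for the whole thing at once.

# Rough sketch, not replyr internals: one lazy query is built for all groups,
# so an error is caught (or not) exactly once -- which is why the result is
# either only the default_spark rows or only the correct rows, never a mixture.
safe_all_at_once <- purrr::possibly(
  function(sdf) {
    sdf %>%
      group_by(userid) %>%
      summarise(max = max(occurred_at),
                min = min(occurred_at))
  },
  otherwise = default_spark
)

safe_all_at_once(test_spark)   # a single lazy query covering every userid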

For the extract mode, it might be possible to add error handling, but the current code does not have it.
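
One way to add it by hand, sketched here with the question's objects (an illustration, not replyr's implementation), is to pull out each userid's rows, apply the wrapped function, and bind the pieces back together:

# Illustrative only: loop over the distinct userids, apply the safe function
# to each one, then row-bind the per-group results on the Spark side.
ids <- test_spark %>% distinct(userid) %>% collect() %>% pull(userid)

pieces <- purrr::map(ids, function(id) {
  test_spark %>%
    filter(userid == !!id) %>%
    safe_for_extract()
})

do.call(sdf_bind_rows, pieces)   # mixture of default_spark and correct rows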

As the author of replyr, I would actually suggest looking into sparklyr's spark_apply() method. replyr's gapply was designed back when spark_apply() was not yet available in sparklyr (and when binding lists of data together was also not yet available in sparklyr).

Also, replyr itself is now mostly limited to patching issues for clients who have used it in large projects, so it is probably not a good choice for new projects.
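
For reference, a minimal sketch of the spark_apply() route (my own example, not part of the original answer, reusing the test_spark tbl and Spark connection from the question): the R function runs once per userid group on the workers, so a tryCatch() inside it can return a sentinel row instead of failing the whole job.

# Minimal sketch: per-group error handling with sparklyr::spark_apply().
# Each userid group arrives at the function as a plain R data frame on a
# worker, so failures can be handled per group with tryCatch().
spark_apply(
  test_spark,
  function(df) {
    tryCatch(
      data.frame(max = max(df$occurred_at), min = min(df$occurred_at)),
      error = function(e) data.frame(max = -1, min = -1)
    )
  },
  group_by = "userid"
)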