Why does sparklyr::spark_apply fail when specifying the schema?

Date: 2018-02-23 16:26:10

Tags: r sparklyr

Given a Spark connection sc:
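For reproducibility, any sparklyr connection will do; a minimal local-mode setup (an assumption here, not part of the original question) looks like:

library(sparklyr)

# start a local Spark session purely for illustration
sc <- spark_connect(master = "local")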

iris_spk <- copy_to(sc, iris)

Next, I want to run spark_apply over it. Take a silly example:

iris_spk %>% 
  spark_apply(
    function(x) {
      data.frame(A=c("a", "b", "c"), B=c(1, 2, 3))
    },
    group_by = "Species",
    columns = c("A", "B"),
    packages = FALSE
  )
# # Source:   table<sparklyr_tmp_3e96258604cd> [?? x 3]
# # Database: spark_connection
#   Species    A         B
#   <chr>      <chr> <dbl>
# 1 versicolor a      1.00
# 2 versicolor b      2.00
# 3 versicolor c      3.00
# 4 virginica  a      1.00
# 5 virginica  b      2.00
# 6 virginica  c      3.00
# 7 setosa     a      1.00
# 8 setosa     b      2.00
# 9 setosa     c      3.00

So far, so good. But https://stackoverflow.com/a/46410425/1785752 suggests that I can improve performance by specifying the output schema rather than just the output column names. So I tried:

iris_spk %>% 
  spark_apply(
    function(x) {
      data.frame(A=c("a", "b", "c"), B=c(1, 2, 3))
    },
    group_by = "Species",
    columns = list(A="character",
                   B="numeric"),
    packages = FALSE
  )

But this time things go wrong:

Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 26.0 failed 4 times, most recent failure: Lost task 1.3 in stage 26.0 (TID 133, ml-dn38.mitre.org, executor 3): java.lang.RuntimeException: Error while encoding: java.lang.RuntimeException: java.lang.String is not a valid external type for schema of double 
  if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, A), StringType), true) AS A#256 
  if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 1, B), DoubleType) AS B#257 
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:290) 
at org.apache.spark.sql.SparkSession$$anonfun$3.apply(SparkSession.scala:581) 
at org.apache.spark.sql.SparkSession$$anonfun$3.apply(SparkSession.scala:581) 
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409) 
at scala.collection.Iterator$$anon$11.next(Iterator.scala:409) 
... and so on

Am I specifying the schema incorrectly?

1 Answer:

Answer 0 (score: 2):

Aha! It turns out the group_by column does not inherit its schema from the input data frame; it needs to be declared along with the rest. (With only two types declared, the three output columns Species, A, B are apparently matched positionally against the schema, so column A's strings end up in the slot declared as double, which is exactly what the encoding error complains about.) I just tried

iris_spk %>% 
  spark_apply(
    function(x) {
      data.frame(A=c("a", "b", "c"), B=c(1, 2, 3))
    },
    group_by = "Species",
    columns = list(Species="character",
                   A="character",
                   B="numeric"),
    packages = FALSE
  )

and it works (giving the same result as the first attempt above).
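As a quick sanity check (a sketch not in the original answer; res is a hypothetical name, and sdf_schema() is sparklyr's schema inspector), the declared types can be confirmed on the result:

# assign the successful call above to a name
res <- iris_spk %>%
  spark_apply(
    function(x) data.frame(A = c("a", "b", "c"), B = c(1, 2, 3)),
    group_by = "Species",
    columns = list(Species = "character", A = "character", B = "numeric"),
    packages = FALSE
  )

# sdf_schema() reports the name and type Spark recorded for each column;
# all three, including the grouping column, should appear
sdf_schema(res)
# expected along the lines of:
#   Species -> StringType, A -> StringType, B -> DoubleType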