library(sparklyr)
library(dplyr)
library(Lahman)
spark_install(version = "2.0.0")
sc <- spark_connect(master = "local")
batting_tbl <- copy_to(sc, Lahman::Batting, "batting"); batting_tbl
batting_tbl %>% arrange(-index())
# Error: org.apache.spark.sql.AnalysisException: Undefined function: 'INDEX'.
# This function is neither a registered temporary
# function nor a permanent function registered in the database 'default'.; line 3 pos 10
Does anyone know how to order a Spark (sparklyr) DataFrame by index using dplyr?
Answer 0 (score: 0)
This is the best solution I could come up with. The original arrange(-index()) fails because dplyr has no Spark SQL translation for index(), so it is passed through as INDEX() and Spark rejects it as an undefined function. While the approach below is correct, note that sdf_with_unique_id returns some very large ID values beyond roughly 62,000 rows, because the IDs are unique and increasing but not consecutive. In any case, this is one way to create a distributed index column with sparklyr.
library(sparklyr)
library(dplyr)
library(Lahman)
options(tibble.width = Inf)
options(dplyr.print_max = Inf)
spark_install(version = "2.0.0")
sc <- spark_connect(master = "local")
batting_tbl <- copy_to(sc, Lahman::Batting, "batting"); batting_tbl
tbl_uncache(sc, "batting")
y <- Lahman::Batting  # local copy kept only for comparison; not needed for the Spark workflow
# Add a distributed unique ID column; values are unique and increasing but not
# consecutive, so they jump to very large values past roughly 62,300 rows.
batting_tbl <- batting_tbl %>% sdf_with_unique_id(id = "id")
batting_tbl %>% arrange(desc(id))  # order by the new id column, descending
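If consecutive row numbers are preferred over the sparse IDs above, a possible alternative is a sequential index. This is a minimal sketch, assuming the installed sparklyr version provides sdf_with_sequential_id (the column name row_id is arbitrary):
# Sketch: sequential (1, 2, 3, ...) index instead of sparse unique IDs.
batting_seq <- batting_tbl %>% sdf_with_sequential_id(id = "row_id")
batting_seq %>% arrange(desc(row_id))  # order by the sequential index, descending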