Spark Scala: Task not serializable for closure

Time: 2016-05-17 20:09:34

Tags: scala serialization apache-spark rdd

I have an RDD of Rows that I want to filter based on a closure. Ultimately I want to pass the closure in as a parameter to the method that does the filtering, but I have simplified it down and can reproduce the error with something as simple as this:

def fn(l: Long): Boolean = true
rdd.filter{ row => fn(row.getAs[Long]("field")) }

I have tried putting fn into a case object, into an object that extends a serializable trait, and defining it both inside and outside the call to filter. I am trying to figure out what I need to do to avoid these errors. I know there are already plenty of questions about this on Stack Overflow and I have been looking for a suitable answer, but I cannot find one.

Name: org.apache.spark.SparkException
Message: Task not serializable
StackTrace: org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
org.apache.spark.SparkContext.clean(SparkContext.scala:2058)
org.apache.spark.rdd.RDD$$anonfun$filter$1.apply(RDD.scala:341)
org.apache.spark.rdd.RDD$$anonfun$filter$1.apply(RDD.scala:340)
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
org.apache.spark.rdd.RDD.filter(RDD.scala:340)
$line131.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:43)
$line131.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:48)
$line131.$read$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:50)
$line131.$read$$iwC$$iwC$$iwC$$iwC.<init>(<console>:52)
$line131.$read$$iwC$$iwC$$iwC.<init>(<console>:54)
$line131.$read$$iwC$$iwC.<init>(<console>:56)
$line131.$read$$iwC.<init>(<console>:58)
$line131.$read.<init>(<console>:60)
$line131.$read$.<init>(<console>:64)
$line131.$read$.<clinit>(<console>)
$line131.$eval$.<init>(<console>:7)
$line131.$eval$.<clinit>(<console>)
$line131.$eval.$print(<console>)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:601)
org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
org.apache.toree.kernel.interpreter.scala.ScalaInterpreter$$anonfun$interpretAddTask$1$$anonfun$apply$3.apply(ScalaInterpreter.scala:356)
org.apache.toree.kernel.interpreter.scala.ScalaInterpreter$$anonfun$interpretAddTask$1$$anonfun$apply$3.apply(ScalaInterpreter.scala:351)
org.apache.toree.global.StreamState$.withStreams(StreamState.scala:81)
org.apache.toree.kernel.interpreter.scala.ScalaInterpreter$$anonfun$interpretAddTask$1.apply(ScalaInterpreter.scala:350)
org.apache.toree.kernel.interpreter.scala.ScalaInterpreter$$anonfun$interpretAddTask$1.apply(ScalaInterpreter.scala:350)
org.apache.toree.utils.TaskManager$$anonfun$add$2$$anon$1.run(TaskManager.scala:140)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:722)

Update:

Here is a more complete example. I am running Jupyter with Toree and executing code from a jar file in my cells. Here are three things I have tried, all of which fail:

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}

class NotWorking1(sc: SparkContext, sqlContext: SQLContext, fn: Long=>Boolean) {
  def myFilterer(rdd:RDD[Row], longField: String): RDD[Row] = rdd.filter{ row => fn(row.getAs[Long](longField)) }
}

object NotWorking1 {
  def apply(sc: SparkContext, sqlContext: SQLContext) = {
    def myFn(l: Long): Boolean = true
    new NotWorking1(sc, sqlContext, myFn)
  }
}

class NotWorking2(sc: SparkContext, sqlContext: SQLContext) {
  def myFn(l: Long): Boolean = true

  def myFilterer(rdd:RDD[Row], longField: String): RDD[Row] = {
    rdd.filter{ row => myFn(row.getAs[Long](longField)) }
  }
}

object NotWorking2 {
  def apply(sc: SparkContext, sqlContext: SQLContext) = {
    new NotWorking2(sc, sqlContext)
  }
}

class NotWorking3(sc: SparkContext, sqlContext: SQLContext) {
  def myFilterer(rdd:RDD[Row], longField: String): RDD[Row] = {
    def myFn(l: Long): Boolean = true
    rdd.filter{ row => myFn(row.getAs[Long](longField)) }
  }
}

object NotWorking3 {
  def apply(sc: SparkContext, sqlContext: SQLContext) = {
    new NotWorking3(sc, sqlContext)
  }
}

From a Jupyter cell, I import the corresponding classes and run:

val nw1 = NotWorking1(sc, sqlContext)
val nw2 = NotWorking2(sc, sqlContext)
val nw3 = NotWorking3(sc, sqlContext)
nw1.myFilterer(rdd, "field")
nw2.myFilterer(rdd, "field")
nw3.myFilterer(rdd, "field")

All three fail. NotWorking3 is particularly surprising. Is there anything I can do to isolate the function rather than trying to serialize the whole object (which I believe would get me into trouble, since I keep references to the Spark and SQL contexts)?

2 Answers:

Answer 0 (score: 1)

In my experience, the easiest way is to use functions instead of methods if you want them to be serializable. In other words, if you want a piece of your code to be shipped to the executors, define it with val, not def.
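A rough illustration of the distinction (the names below are mine, purely for illustration): a def compiles to a method on its enclosing class, so a closure that calls it has to capture a reference to that instance, while a val holding a function literal is a standalone Function1 object that Scala already makes serializable.

def keepByMethod(l: Long): Boolean = true // a method of the enclosing class; a closure calling it drags `this` along
val keepByFunction = (l: Long) => true    // a self-contained, serializable function value; no `this` involved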

In your example, in class NotWorking3, change myFn as follows and it will work:

val myFn = (l: Long) => true

Update:

For NotWorking1 and 2, in addition to using val instead of def, you also need to extend the Serializable trait and use the @SerialVersionUID annotation. Here is a working version of your examples (with slight changes here and there):

import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}

@SerialVersionUID(100L)
class Working1(sc: SparkContext, sqlContext: SQLContext, fn: Long=>Boolean) extends Serializable{
  def myFilterer(rdd:RDD[Row]): RDD[Row] = rdd.filter{ row => fn(row.getAs[Long](0)) }
}

@SerialVersionUID(101L)
class Working2 (sc: SparkContext, sqlContext: SQLContext) extends Serializable{
  val myFn = (l: Long) => true

  def myFilterer(rdd:RDD[Row]): RDD[Row] = {
    rdd.filter{ row => myFn(row.getAs[Long](0)) }
  }
}

class Working3 (sc: SparkContext, sqlContext: SQLContext) {
  def myFilterer(rdd:RDD[Row]): RDD[Row] = {
    val myFn = (l: Long) => true
    rdd.filter{ row => myFn(row.getAs[Long](0)) }
  }
}

val myFnGlobal = (l: Long) => true
val r1 = sc.parallelize(List(1L,2L,3L,4L,5L,6L,7L)).map(x => Row(x))

val w1 = new Working1(sc, sqlContext, myFnGlobal)
val w2 = new Working2(sc, sqlContext)
val w3 = new Working3(sc, sqlContext)
w1.myFilterer(r1).collect
w2.myFilterer(r1).collect
w3.myFilterer(r1).collect
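As a side note (this is my own sketch, not part of the answer above, and the names Working2Alt and localFn are illustrative): if you would rather not make the class Serializable at all, a common workaround is to copy the function into a local val inside the method, so the closure captures only that local value instead of `this`:

class Working2Alt(sc: SparkContext, sqlContext: SQLContext) {
  val myFn = (l: Long) => true

  def myFilterer(rdd: RDD[Row]): RDD[Row] = {
    val localFn = myFn                                 // copy the field into a local val
    rdd.filter { row => localFn(row.getAs[Long](0)) }  // the closure now captures only localFn, not `this`
  }
}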

Answer 1 (score: 0)

The answer from @JustinPihony is correct: Jupyter will dynamically create a class containing the code you type into its session and then submit it to Spark on your behalf. The fn you create needs to be contained inside that enclosing class.

You may need to jar up your custom logic into a user-defined jar file and include it on the Jupyter classpath. How you add it to the classpath will depend on which Jupyter kernel you are using.
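For example, with the Apache Toree Scala kernel you can usually pull a jar onto the classpath from a cell with the %AddJar magic (the path below is hypothetical, and the exact magic syntax depends on your Toree version):

%AddJar file:///path/to/my-filter-logic.jar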