I am trying to iterate over a JavaPairRDD, perform some calculations using each pair's key and value, and then output the result for each pair to a processedData list.
What I have already tried:
- making the variables used in the lambda static
- making the method called from the lambda in the foreach loop static
- adding implements Serializable
Here is my code:
List<String> processedData = new ArrayList<>();

JavaPairRDD<WebLabGroupObject, Iterable<WebLabPurchasesDataObject>> groupedByWebLabData = ...;
groupedByWebLabData.foreach(data -> {
    JavaRDD<WebLabPurchasesDataObject> oneGroupOfData = convertIterableToJavaRdd(data._2());
    double opsForOneGroup = getOpsForGroup(oneGroupOfData);
    double unitsForOneGroup = getUnitsForGroup(oneGroupOfData);
    String combinedOutputForOneGroup = data._1().getProductGroup() + "," + opsForOneGroup + "," + unitsForOneGroup;
    processedData.add(combinedOutputForOneGroup);
});

private JavaRDD<WebLabPurchasesDataObject> convertIterableToJavaRdd(Iterable<WebLabPurchasesDataObject> groupedElements)
{
    List<WebLabPurchasesDataObject> list = new ArrayList<>();
    groupedElements.forEach(el -> list.add(el));
    return this.context.parallelize(list);
}
Here is the exception:
Exception in thread "main" org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
at org.apache.spark.SparkContext.clean(SparkContext.scala:1623)
at org.apache.spark.rdd.RDD.foreach(RDD.scala:797)
at org.apache.spark.api.java.JavaRDDLike$class.foreach(JavaRDDLike.scala:312)
at org.apache.spark.api.java.AbstractJavaRDDLike.foreach(JavaRDDLike.scala:46)
at com.amazon.videoads.emr.spark.WebLabDataAnalyzer.processWebLabData(WebLabDataAnalyzer.java:121)
at com.amazon.videoads.emr.spark.WebLabMetricsApplication.main(WebLabMetricsApplication.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.NotSerializableException: org.apache.spark.api.java.JavaSparkContext
Serialization stack:
- object not serializable (class: org.apache.spark.api.java.JavaSparkContext, value: org.apache.spark.api.java.JavaSparkContext@395e9596)
- element of array (index: 0)
- array (class [Ljava.lang.Object;, size 2)
- field (class: java.lang.invoke.SerializedLambda, name: capturedArgs, type: class [Ljava.lang.Object;)
- object (class com.amazon.videoads.emr.spark.WebLabDataAnalyzer$$Lambda$14/1536342848, com.amazon.videoads.emr.spark.WebLabDataAnalyzer$$Lambda$14/1536342848@5acc8c7c)
- field (class: org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1, name: f$14, type: interface org.apache.spark.api.java.function.VoidFunction)
- object (class org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1, <function1>)
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:38)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:80)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:164)
... 16 more
Answer (score: 1)
TL;DR: you are trying to use the JavaSparkContext inside your groupedByWebLabData RDD, and you can't, because JavaSparkContext is not serializable.
The stack trace is very helpful here:
Caused by: java.io.NotSerializableException: org.apache.spark.api.java.JavaSparkContext
Serialization stack:
This means the task closure is dragging a JavaSparkContext along with it. It is caused by these two lines:
groupedByWebLabData.foreach(data -> {
    JavaRDD<WebLabPurchasesDataObject> oneGroupOfData = convertIterableToJavaRdd(data._2());
because convertIterableToJavaRdd is called for every element of the RDD and uses this.context.parallelize(list), i.e. it uses the JavaSparkContext: you are trying to use the JavaSparkContext on the executors (where the data of your groupedByWebLabData RDD lives). Well, you can't do that, because JavaSparkContext is not serializable.
You can do your work via a UDF instead, and then collect the results (if they are not too big).
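As a sketch of that fix (assuming the per-group metrics can be computed directly from the grouped Iterable; getOps and getUnits below are hypothetical accessors, not from the question):

// Compute each group's output inside a map transformation: no
// JavaSparkContext is needed on the executors, and no shared driver-side
// list is mutated from within a closure.
List<String> processedData = groupedByWebLabData
        .map(data -> {
            double ops = 0.0;
            double units = 0.0;
            // Iterate the grouped values directly instead of wrapping them
            // in a nested RDD via context.parallelize.
            for (WebLabPurchasesDataObject purchase : data._2()) {
                ops += purchase.getOps();     // hypothetical accessor
                units += purchase.getUnits(); // hypothetical accessor
            }
            return data._1().getProductGroup() + "," + ops + "," + units;
        })
        .collect(); // only safe when the result fits in driver memory

This also sidesteps the other problem in the original code: adding to the driver-side processedData list from inside foreach does not work on a cluster, because each executor mutates its own deserialized copy of the list.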