I have the following application, which connects to MongoDB through the MongoDB Spark Connector. My code crashes because the SparkContext is null on the executors. Essentially, I read data from MongoDB, and processing that data produces additional queries that need to be sent back to MongoDB. The final step is to save the data returned by those additional queries. The code I use:
JavaMongoRDD<Document> rdd = MongoSpark.load(sc);
JavaMongoRDD<Document> aggregatedRdd = rdd.withPipeline(...);

JavaPairRDD<String, Document> pairRdd = aggregatedRdd
        .mapToPair((document) -> new Tuple2(document.get("_id"), document));

JavaPairRDD<String, List<Document>> mergedRdd = pairRdd.aggregateByKey(new LinkedList<Document>(),
        combineFunction, mergeFunction);

JavaRDD<Tuple2<String, List<Tuple2<Date, Date>>>> dateRdd = mergedRdd.map(...);

// At this point dateRdd contains key/value pairs of:
//   Key:   a MongoDB document ID (String)
//   Value: a List of Tuple2<Date, Date>, i.e. date ranges (start time and end time).
// For each of those date ranges I want to retrieve the data from MongoDB
// and, for now, I just want to save that data.
dateRdd.foreachPartition(new VoidFunction<Iterator<Tuple2<String, List<Tuple2<Date, Date>>>>>() {
    @Override
    public void call(Iterator<Tuple2<String, List<Tuple2<Date, Date>>>> partitionIterator) throws Exception {
        while (partitionIterator.hasNext()) {
            Tuple2<String, List<Tuple2<Date, Date>>> tuple = partitionIterator.next();
            String fileName = tuple._1;
            List<Tuple2<Date, Date>> dateRanges = tuple._2;

            for (Tuple2<Date, Date> dateRange : dateRanges) {
                Date startDate = dateRange._1;
                Date endDate = dateRange._2;
                Document aggregationDoc = Document.parse("{ $match: { ts: { $lt: new Date(" + startDate.getTime()
                        + "), $gt: new Date(" + endDate.getTime() + ") }, root_document: \"" + fileName
                        + "\", signals: { $elemMatch: { signal: \"SomeValue\" } } } }");

                // This call reuses the initial MongoSpark rdd with the aggregation pipeline
                // that was just created; the pipeline will get sent to MongoDB.
                JavaMongoRDD<Document> filteredSignalRdd = rdd.withPipeline(Arrays.asList(aggregationDoc));

                String outputFileName = String.format("output_data_%s_%d-%d", fileName,
                        startDate.getTime(), endDate.getTime());
                filteredSignalRdd.saveAsTextFile(outputFileName);
            }
        }
    }
});
The exception I get is:
Job aborted due to stage failure: Task 23 in stage 2.0 failed 4 times, most recent failure: Lost task 23.3 in stage 2.0 (TID 501, hadoopb24): java.lang.IllegalArgumentException: requirement failed: RDD transformation requires a non-null SparkContext.
Unfortunately SparkContext in this MongoRDD is null.
This can happen after MongoRDD has been deserialized.
SparkContext is not Serializable, therefore it deserializes to null.
RDD transformations are not allowed inside lambdas used in other RDD transformations.
at scala.Predef$.require(Predef.scala:233)
at com.mongodb.spark.rdd.MongoRDD.checkSparkContext(MongoRDD.scala:170)
at com.mongodb.spark.rdd.MongoRDD.copy(MongoRDD.scala:126)
at com.mongodb.spark.rdd.MongoRDD.withPipeline(MongoRDD.scala:116)
at com.mongodb.spark.rdd.api.java.JavaMongoRDD.withPipeline(JavaMongoRDD.scala:46)
What is going wrong here, and how can I achieve this "nested", asynchronous creation of new queries?

How can I access the MongoSpark "context" inside the executors? The MongoSpark library needs access to the SparkContext, which is not available on the executors.

Do I need to send all the data back to the driver and then have the driver issue the new calls against the MongoSpark "context"? I can see how that might work, but it would have to happen asynchronously, i.e. whenever a partition has finished processing its data and has its <String, Tuple2<Date, Date>> pairs ready, push them to the driver and have it start the new queries. How can that be done?
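For clarity, the simplest synchronous version of that driver-side idea would look roughly like the sketch below; it just collects everything to the driver and loops, so it loses the parallelism and the asynchrony I am actually after:

List<Tuple2<String, List<Tuple2<Date, Date>>>> collected = dateRdd.collect();

for (Tuple2<String, List<Tuple2<Date, Date>>> tuple : collected) {
    String fileName = tuple._1;
    for (Tuple2<Date, Date> dateRange : tuple._2) {
        Document aggregationDoc = Document.parse("{ $match: { ts: { $lt: new Date(" + dateRange._1.getTime()
                + "), $gt: new Date(" + dateRange._2.getTime() + ") }, root_document: \"" + fileName
                + "\", signals: { $elemMatch: { signal: \"SomeValue\" } } } }");

        // withPipeline is legal here because this loop runs on the driver,
        // where the RDD's SparkContext is non-null.
        JavaMongoRDD<Document> filteredSignalRdd = rdd.withPipeline(Arrays.asList(aggregationDoc));
        filteredSignalRdd.saveAsTextFile(String.format("output_data_%s_%d-%d",
                fileName, dateRange._1.getTime(), dateRange._2.getTime()));
    }
}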
Answer (score: 3)
This is expected behaviour and it is not going to change. Spark does not support nested transformations or actions: you cannot invoke an RDD transformation (here rdd.withPipeline(...)) from inside a function that is itself running on the executors, which is exactly what the exception message tells you.

In this case you can use a standard Mongo client instead, i.e. open a connection inside foreachPartition and query MongoDB directly, without going through Spark.
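A minimal sketch of that approach, assuming the plain MongoDB Java driver (mongodb-driver-sync) is available on the executors; the connection URI, database and collection names below are placeholders, not taken from the question:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;
import scala.Tuple2;
import java.util.Arrays;
import java.util.Date;
import java.util.List;

dateRdd.foreachPartition(partitionIterator -> {
    // One client per partition, created on the executor itself -- no SparkContext involved.
    try (MongoClient client = MongoClients.create("mongodb://host:27017")) {
        MongoCollection<Document> collection =
                client.getDatabase("mydb").getCollection("mycollection");

        while (partitionIterator.hasNext()) {
            Tuple2<String, List<Tuple2<Date, Date>>> tuple = partitionIterator.next();
            String fileName = tuple._1;

            for (Tuple2<Date, Date> dateRange : tuple._2) {
                Document match = Document.parse("{ $match: { ts: { $lt: new Date("
                        + dateRange._1.getTime() + "), $gt: new Date(" + dateRange._2.getTime()
                        + ") }, root_document: \"" + fileName
                        + "\", signals: { $elemMatch: { signal: \"SomeValue\" } } } }");

                // Run the aggregation directly against MongoDB on the executor and
                // handle the results however you need (write them to a file, etc.).
                for (Document doc : collection.aggregate(Arrays.asList(match))) {
                    // process / save doc
                }
            }
        }
    }
});

Creating one client per partition (rather than per record) keeps the number of MongoDB connections bounded; with this pattern the follow-up queries run in parallel across partitions and no data has to be shipped back to the driver.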