This was working code, but it suddenly stopped working after I tried creating the SparkSession from a different Scala object.
    val b = a.filter { x => (!x._2._1.isEmpty) && (!x._2._2.isEmpty) }
    val primary_key_distinct = b.map(rec => (rec._1.split(",")(0))).distinct
    for (i <- primary_key_distinct) {
      b.foreach(println)
    }
Error:
ERROR Executor: Exception in task 0.0 in stage 5.0 (TID 5)
org.apache.spark.SparkException: This RDD lacks a SparkContext. It could happen in the following cases:
(1) RDD transformations and actions are NOT invoked by the driver, but inside of other transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the rdd1.map transformation. For more information, see SPARK-5063.
(2) When a Spark Streaming job recovers from checkpoint, this exception will be hit if a reference to an RDD not defined by the streaming job is used in DStream operations. For more information, See SPARK-13758.
Even after I reverted the change, and although I am not using any such object, it still doesn't work.
Updated code:
    import org.apache.spark.sql.SparkSession

    object `try` { // `try` is a reserved word in Scala, so it has to be back-quoted
      def main(args: Array[String]) {
        val spark = SparkSession.builder().master("local").appName("50columns3nodes").getOrCreate()

        // Read both CSVs and flatten each row back into a comma-separated string
        var s = spark.read.csv("/home/hadoopuser/Desktop/input/source.csv").rdd.map(_.mkString(","))
        var k = spark.read.csv("/home/hadoopuser/Desktop/input/destination.csv").rdd.map(_.mkString(","))

        // Key every record by its first column
        val source_primary_key = s.map(rec => (rec.split(",")(0), rec))
        val destination_primary_key = k.map(rec => (rec.split(",")(0), rec))

        // Group both sides by key and keep only keys whose record groups differ
        val a = source_primary_key.cogroup(destination_primary_key).filter { x => ((x._2._1) != (x._2._2)) }

        // Keys present on both sides but with differing records
        val b = a.filter { x => (!x._2._1.isEmpty) && (!x._2._2.isEmpty) }

        // Records present only in the destination / only in the source
        var extra_In_Dest = a.filter(x => x._2._1.isEmpty && !x._2._2.isEmpty).map(rec => (rec._2._2.mkString("")))
        var extra_In_Src = a.filter(x => !x._2._1.isEmpty && x._2._2.isEmpty).map(rec => (rec._2._1.mkString("")))

        val primary_key_distinct = b.map(rec => (rec._1.split(",")(0))).distinct

        // Problem: this traverses one RDD and runs an action on another inside it
        for (i <- primary_key_distinct) {
          var lengthofarray = 0
          println(i)
          b.foreach(println)
        }
      }
    }
The input data is as follows:

    source.csv (s):
    1,david
    2,ajay
    3,jijo
    4,abi
    5,surendhar

    destination.csv (k):
    1,david
    2,ajay
    3,jijoaa
    4,abisdsdd
    5,surendhar
After the cogroup and filter, val a contains {3,(jijo,jijoaa)} and {4,(abi,abisdsdd)} — the keys whose records differ between the two files.
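To make that concrete, here is a minimal, self-contained sketch (not from the original post; the object and variable names are mine, and the sample values are taken from the data above) of what cogroup plus the inequality filter produces for the differing keys:

    import org.apache.spark.sql.SparkSession

    object CogroupSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().master("local").appName("cogroup-sketch").getOrCreate()
        val sc = spark.sparkContext

        // Records keyed by their first column, as in the question's map step
        val src = sc.parallelize(Seq(("3", "3,jijo"), ("4", "4,abi"), ("5", "5,surendhar")))
        val dst = sc.parallelize(Seq(("3", "3,jijoaa"), ("4", "4,abisdsdd"), ("5", "5,surendhar")))

        // cogroup yields (key, (Iterable[srcRecords], Iterable[dstRecords]));
        // comparing via toSeq keeps only keys whose grouped records differ
        val diff = src.cogroup(dst).filter { case (_, (l, r)) => l.toSeq != r.toSeq }
        diff.collect().foreach(println)
        // Expected output (the buffer type printed may vary by Spark version):
        // (3,(CompactBuffer(3,jijo),CompactBuffer(3,jijoaa)))
        // (4,(CompactBuffer(4,abi),CompactBuffer(4,abisdsdd)))

        spark.stop()
      }
    }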
Answer (score: 1)
If you read the first message carefully, it says:

(1) RDD transformations and actions are NOT invoked by the driver, but inside of other transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the rdd1.map transformation. For more information, see SPARK-5063.

It clearly states that actions and transformations cannot be performed inside another transformation.
primary_key_distinct is a transformation applied to b, and b is itself a transformation applied to a. And b.foreach(println) is an action invoked inside the traversal of primary_key_distinct — exactly the forbidden pattern.
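To see why, note that a for comprehension over an RDD desugars to foreach on that RDD (the desugared form below is my illustration, not code from the post):

    // What `for (i <- primary_key_distinct) { b.foreach(println) }` becomes:
    primary_key_distinct.foreach { i =>
      // This closure is serialized and executed on the executors.
      // `b` is captured by the closure, but on an executor it has no
      // SparkContext, so invoking an action on it throws the SPARK-5063
      // error shown above.
      b.foreach(println)
    }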
So if you collect primary_key_distinct or b in the driver, then the code should run properly:

    val b = a.filter { x => (!x._2._1.isEmpty) && (!x._2._2.isEmpty) }.collect

or

    val primary_key_distinct = b.map(rec => (rec._1.split(",")(0))).distinct.collect

Or, if you don't use an action inside another transformation, the code should also run fine.
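Putting the suggestion together, a minimal sketch of the corrected loop (my assembly of the fix, assuming the same a and b as in the question; bLocal and primaryKeysLocal are illustrative names):

    // Collect both RDDs to the driver, then loop over plain local collections.
    val bLocal = b.collect() // Array[(String, (Iterable[String], Iterable[String]))]
    val primaryKeysLocal = b.map(rec => rec._1.split(",")(0)).distinct.collect()

    for (i <- primaryKeysLocal) { // a plain local Scala loop, no nested RDD action
      println(i)
      bLocal.foreach(println)
    }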
I hope the explanation is clear.