This is what I get when I use toDebugString in Scala:
scala> val a = sc.parallelize(Array(1,2,3)).distinct
a: org.apache.spark.rdd.RDD[Int] = MappedRDD[3] at distinct at <console>:12
scala> a.toDebugString
res0: String =
(4) MappedRDD[3] at distinct at <console>:12
| ShuffledRDD[2] at distinct at <console>:12
+-(4) MappedRDD[1] at distinct at <console>:12
| ParallelCollectionRDD[0] at parallelize at <console>:12
And this is the equivalent in Python:
>>> a = sc.parallelize([1,2,3]).distinct()
>>> a.toDebugString()
'(4) PythonRDD[6] at RDD at PythonRDD.scala:43\n | MappedRDD[5] at values at NativeMethodAccessorImpl.java:-2\n | ShuffledRDD[4] at partitionBy at NativeMethodAccessorImpl.java:-2\n +-(4) PairwiseRDD[3] at RDD at PythonRDD.scala:261\n | PythonRDD[2] at RDD at PythonRDD.scala:43\n | ParallelCollectionRDD[0] at parallelize at PythonRDD.scala:315'
As you can see, the Python output is not nearly as readable as the Scala one. Is there any trick to get better output from this function?
I am using Spark 1.1.0.
Answer 0 (score: 14)
Try adding a print statement so that the debug string is actually printed, rather than displayed through its __repr__:
>>> a = sc.parallelize([1,2,3]).distinct()
>>> print a.toDebugString()
(8) PythonRDD[27] at RDD at PythonRDD.scala:44 [Serialized 1x Replicated]
| MappedRDD[26] at values at NativeMethodAccessorImpl.java:-2 [Serialized 1x Replicated]
| ShuffledRDD[25] at partitionBy at NativeMethodAccessorImpl.java:-2 [Serialized 1x Replicated]
+-(8) PairwiseRDD[24] at distinct at <stdin>:1 [Serialized 1x Replicated]
| PythonRDD[23] at distinct at <stdin>:1 [Serialized 1x Replicated]
| ParallelCollectionRDD[21] at parallelize at PythonRDD.scala:358 [Serialized 1x Replicated]
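Note: on Python 3 (and in later PySpark releases, where toDebugString() may return UTF-8 encoded bytes rather than a str), you would call print as a function and may also need to decode the result. A minimal sketch, assuming a live SparkContext named sc; the bytes/str check is an assumption meant to cover both cases:
>>> a = sc.parallelize([1, 2, 3]).distinct()
>>> debug = a.toDebugString()            # may be bytes on newer PySpark
>>> if isinstance(debug, bytes):
...     debug = debug.decode("utf-8")    # decode so the newlines render when printed
...
>>> print(debug)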
Answer 1 (score: 0)
Nothing has been executed yet; the RDD is only cached lazily. You should use:
a = sc.parallelize([1,2,3]).distinct()
a.collect()
[1, 2, 3]
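To expand on that: transformations such as distinct() are lazy, so toDebugString() only describes the lineage that has been planned; an action such as collect() or count() is what actually triggers the job. A minimal sketch, assuming a live SparkContext named sc:
>>> a = sc.parallelize([1, 2, 3]).distinct()   # lazy: nothing runs yet
>>> print(a.toDebugString())                   # only prints the planned lineage
>>> a.count()                                  # an action: this triggers the shuffle and the job
3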