value join is not a member of org.apache.spark.rdd.RDD

Date: 2015-03-25 20:12:59

Tags: scala apache-spark

I am getting this error:

value join is not a member of 
    org.apache.spark.rdd.RDD[(Long, (Int, (Long, String, Array[_0])))
        forSome { type _0 <: (String, Double) }]

The only suggestion I have found is import org.apache.spark.SparkContext._, which I am already doing.

What am I doing wrong?

Edit: Changing the code to get rid of the forSome (i.e., so that the object has type org.apache.spark.rdd.RDD[(Long, (Int, (Long, String, Array[(String, Double)])))]) fixed the problem. Is this a bug in Spark?
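A minimal sketch of that fix (the names s3 and r3 are mine, assuming a SparkContext sc is in scope):

scala> val s3 = Seq.empty[(Long, (Int, (Long, String, Array[(String, Double)])))]
scala> val r3 = sc.parallelize(s3)
scala> r3.join(r3) // compiles: the pair type splits cleanly into K = Long and V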

2 answers:

Answer 0 (score: 6):

join is indeed a member of org.apache.spark.rdd.PairRDDFunctions. So why doesn't the implicit conversion kick in?

scala> val s = Seq[(Long, (Int, (Long, String, Array[_0]))) forSome { type _0 <: (String, Double) }]()
scala> val r = sc.parallelize(s)
scala> r.join(r) // Gives your error message.
scala> val p = new org.apache.spark.rdd.PairRDDFunctions(r)
<console>:25: error: no type parameters for constructor PairRDDFunctions: (self: org.apache.spark.rdd.RDD[(K, V)])(implicit kt: scala.reflect.ClassTag[K], implicit vt: scala.reflect.ClassTag[V], implicit ord: Ordering[K])org.apache.spark.rdd.PairRDDFunctions[K,V] exist so that it can be applied to arguments (org.apache.spark.rdd.RDD[(Long, (Int, (Long, String, Array[_0]))) forSome { type _0 <: (String, Double) }])
 --- because ---
argument expression's type is not compatible with formal parameter type;
 found   : org.apache.spark.rdd.RDD[(Long, (Int, (Long, String, Array[_0]))) forSome { type _0 <: (String, Double) }]
 required: org.apache.spark.rdd.RDD[(?K, ?V)]
Note: (Long, (Int, (Long, String, Array[_0]))) forSome { type _0 <: (String, Double) } >: (?K, ?V), but class RDD is invariant in type T.
You may wish to define T as -T instead. (SLS 4.5)
       val p = new org.apache.spark.rdd.PairRDDFunctions(r)
               ^
<console>:25: error: type mismatch;
 found   : org.apache.spark.rdd.RDD[(Long, (Int, (Long, String, Array[_0]))) forSome { type _0 <: (String, Double) }]
 required: org.apache.spark.rdd.RDD[(K, V)]
       val p = new org.apache.spark.rdd.PairRDDFunctions(r)

I'm sure the error message is perfectly clear to everyone else, but just for my own slow self, let's try to understand it. PairRDDFunctions has two type parameters, K and V. Your forSome quantifies over the whole pair, so it cannot be split into separate K and V types. There exist no K and V such that RDD[(K, V)] would equal your RDD type.
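To make the shape of the problem visible, here is the same type written as an alias (the alias name is mine, for illustration only):

scala> type PairExistential = (Long, (Int, (Long, String, Array[_0]))) forSome { type _0 <: (String, Double) }
scala> // the quantifier wraps the whole pair, so there is no standalone key type and value type to extract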

However, you can make the forSome apply just to the value instead of to the whole pair. The join works then, because this type can be split into a K and a V:

scala> val s2 = Seq[(Long, (Int, (Long, String, Array[_0])) forSome { type _0 <: (String, Double) })]()
scala> val r2 = sc.parallelize(s2)
scala> r2.join(r2)
res0: org.apache.spark.rdd.RDD[(Long, ((Int, (Long, String, Array[_0])) forSome { type _0 <: (String, Double) }, (Int, (Long, String, Array[_0])) forSome { type _0 <: (String, Double) }))] = MapPartitionsRDD[5] at join at <console>:26

Answer 1 (score: 0):

Consider joining two Spark RDDs together.

Say rdd1.first is of the form (Int, Int, Float) = (1,957,299.98), and rdd2.first is similarly of the form (Int, Int) = (25876,1), and the join should happen on the first field of the two RDDs.

scala> rdd1.join(rdd2) // gives the error:
**: error: value join is not a member of org.apache.spark.rdd.RDD[(Int, Int, Float)]

Reason

Both RDDs should be in the form of key-value pairs.

Here rdd1 (of the form (1,957,299.98)) does not follow this rule, while rdd2 (of the form (25876,1)) does.

Solution

Convert the output of the first RDD from (1,957,299.98) into a key-value pair of the form (1,(957,299.98)), and then join it with rdd2, as below:

scala> val rdd1KV = rdd1.map(x => (x.split(",")(0).toInt, (x.split(",")(1).toInt, x.split(",")(2).toFloat))) // re-keyed RDD: "1,957,299.98" becomes (1,(957,299.98))

scala> rdd1KV.join(rdd2) // join successful :)
res**: (Int, (Int, Float)) = (1,(957,299.98))
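If rdd1 already holds tuples rather than raw comma-separated lines (as the (Int, Int, Float) notation above suggests), the same re-keying can be written as a pattern match; a hypothetical sketch:

scala> // assuming rdd1: RDD[(Int, Int, Float)] and rdd2: RDD[(Int, Int)]
scala> val rdd1KV = rdd1.map { case (k, a, b) => (k, (a, b)) } // (1,957,299.98f) becomes (1,(957,299.98f))
scala> rdd1KV.join(rdd2) // RDD[(Int, ((Int, Float), Int))]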

By the way, join is a member of org.apache.spark.rdd.PairRDDFunctions, so make sure it is imported wherever you run the code, whether in Eclipse or another IDE.
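Concretely, on Spark versions before 1.3 the conversion has to be imported explicitly; from 1.3 onwards the implicits also live on the RDD companion object, so the import is only needed on older versions:

// brings the implicit rddToPairRDDFunctions conversion into scope (Spark < 1.3)
import org.apache.spark.SparkContext._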

The post on my blog:

https://tips-to-code.blogspot.com/2018/08/apache-spark-error-resolution-value.html