Scala Double error when compiling in the Spark REPL

Asked: 2014-12-09 04:37:46

Tags: scala, double

I am learning Scala.

<pre>
scala> val sample = similarities.filter(m => {
     |       val movies = m._1
     |       (movieNames(movies._1).contains("Star Wars (1977)"))
     |     })
</pre>

sample: org.apache.spark.rdd.RDD[((Int,Int),Double)] = FilteredRDD[25] at filter at <console>:36

sample compiles fine.

But when I try to use sample again in the next command:

<pre>
scala> val result = sample.map(v => {
     |       val m1 = v._1._1
     |       val m2 = v._1._2
     |       val correl = v._2._1
     |       //val rcorr = v._2._2
     |      // val cos = v._2._3
     |       //val j = v._2._4
     |       (movieNames(m1), movieNames(m2), correl)
     |     })
<console>:41: error: value _1 is not a member of Double
             val correl = v._2._1
</pre>

Can someone please help me? Thanks in advance.

2 Answers:

Answer 0 (score: 1)

val correl = v._2._1

should just be

val correl = v._2

since the second element of the tuple ((Int, Int), Double) is already the Double itself, not a nested tuple.
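
Applied to the map from the question, the fixed version would look roughly like this (a sketch, assuming sample and movieNames are defined as in the question):

<pre>
val result = sample.map(v => {
  val m1 = v._1._1          // first movie id from the key pair
  val m2 = v._1._2          // second movie id from the key pair
  val correl = v._2         // v._2 is already the Double, so no further ._1
  (movieNames(m1), movieNames(m2), correl)
})
</pre>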

Answer 1 (score: 1)

Given how much indexing into the composite tuple is involved, consider wrapping ((Int, Int), Double) in a case class and defining an implicit conversion as follows,

<pre>
case class Movie(m1: Int, m2: Int, correl: Double)

implicit def RichMovie(v: ((Int, Int), Double)) = Movie(v._1._1, v._1._2, v._2)
</pre>

So, given an instance of the composite tuple,

<pre>
scala> val m = ( (1,2), 3.5)
m: ((Int, Int), Double) = ((1,2),3.5)
</pre>

we can access its members as follows,

<pre>
scala> m.m1
res0: Int = 1

scala> m.m2
res1: Int = 2

scala> m.correl
res2: Double = 3.5
</pre>
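
Applied to the RDD from the question, the map could then be written roughly as below (a sketch, assuming sample and movieNames are in scope as in the question and the RichMovie implicit conversion is visible where the closure is defined):

<pre>
// Hypothetical usage: each ((Int, Int), Double) element is converted to a
// Movie via the RichMovie implicit, so its parts can be read by name.
val result = sample.map(v => (movieNames(v.m1), movieNames(v.m2), v.correl))
</pre>

This keeps the tuple indexing in one place (the conversion) and gives the positions descriptive names.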