Why am I getting a type mismatch in Scala Spark?

Asked: 2018-05-21 09:39:34

Tags: scala apache-spark apache-spark-mllib

First, I read a text file and convert it into an RDD[(String, (String, Float))]:

import org.apache.spark.rdd.RDD

val data = sc.textFile(dataInputPath)
val dataRDD: RDD[(String, (String, Float))] = data.map { f =>
  val temp = f.split("\\x01")            // split on the 0x01 control character
  (temp(0), (temp(1), temp(2).toFloat))  // toFloat, matching the declared Float
}

Then I run the following code to convert my data into the Rating type:

import org.apache.spark.mllib.recommendation.Rating
val imeiMap = dataRDD.reduceByKey((s1,s2)=>s1).collect().zipWithIndex.toMap;
val docidMap = dataRDD.map( f=>(f._2._1,1)).reduceByKey((s1,s2)=>s1).collect().zipWithIndex.toMap;
val ratings = dataRDD.map{case (imei, (doc_id,rating))=> Rating(imeiMap(imei),docidMap(doc_id),rating)};
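
For reference, Rating in org.apache.spark.mllib.recommendation is essentially the following case class (annotations omitted), which is why both maps need to yield Int indices:

case class Rating(user: Int, product: Int, rating: Double)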

But I get the following error:

Error:(32, 77) type mismatch;
 found   : String
 required: (String, (String, Float))
    val ratings = dataRDD.map{case (imei, (doc_id,rating))=> Rating(imeiMap(imei),docidMap(doc_id),rating)};

Why does this happen? I thought the String had already been converted to (String, (String, Float)).

2 Answers:

Answer 0 (score: 2)

The problem is not with your dataRDD; it is with your imeiMap:

imeiMap: scala.collection.immutable.Map[(String, (String, Float)),Int]

Its keys are whole (String, (String, Float)) records, not Strings, so the lookup imeiMap(imei) cannot compile when imei is a String.
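
To see where that key type comes from, here is the type after each step (a sketch based on the RDD type from the question):

dataRDD.reduceByKey((s1, s2) => s1)  // RDD[(String, (String, Float))]
  .collect()                         // Array[(String, (String, Float))]
  .zipWithIndex                      // Array[((String, (String, Float)), Int)]
  .toMap                             // Map[(String, (String, Float)), Int]

zipWithIndex pairs each element, i.e. each whole key-value tuple, with its index, so those tuples become the map keys.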

Answer 1 (score: 2)

The key of docidMap is not a String, it is a Tuple (String, Int).

This is because you have zipWithIndex before the .toMap method. Using this rdd as input for a quick test:

(String1,( String2,32.0))
(String1,( String2,35.0))

scala> val docidMap = dataRDD.map( f=>(f._2._1,1)).reduceByKey((s1,s2)=>s1).collect().zipWithIndex.toMap
docidMap: scala.collection.immutable.Map[(String, Int),Int] = Map((" String2",1) -> 0)

scala> val docidMap = dataRDD.map( f=>(f._2._1,1)).reduceByKey((s1,s2)=>s1).collect().toMap
docidMap: scala.collection.immutable.Map[String,Int] = Map(" String2" -> 1)

The same thing happens with your imeiMap; it seems you just need to remove the zipWithIndex from there.
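
Note that simply dropping zipWithIndex leaves you without the integer indices that Rating needs, so a cleaner route is to apply zipWithIndex to the distinct keys only. A minimal sketch (variable names taken from the question; the .keys/.distinct approach is my suggestion, not part of the original answer):

import org.apache.spark.mllib.recommendation.Rating

// Index each distinct imei and doc_id, so the map keys stay plain Strings.
val imeiMap:  Map[String, Int] = dataRDD.keys.distinct().collect().zipWithIndex.toMap
val docidMap: Map[String, Int] = dataRDD.map(_._2._1).distinct().collect().zipWithIndex.toMap

val ratings = dataRDD.map { case (imei, (docId, rating)) =>
  Rating(imeiMap(imei), docidMap(docId), rating)  // the Float rating widens to Double
}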