I want to use HIPI to process images on Spark, so I use hadoopFile to create the RDD.
val conf = new SparkConf().setAppName("BundleTest")
val sc = new SparkContext(conf)
val bundle0 = sc.hadoopFile[HipiImageHeader,FloatImage,HibInputFormat]("hdfs://192.168.199.11:8020/Hdfs/Image/image.hib.dat",1000)
But I got an error:
Error:(39, 22) type arguments [org.hipi.image.HipiImageHeader,org.hipi.image.FloatImage,org.hipi.imagebundle.mapreduce.HibInputFormat] conform to the bounds of none of the overloaded alternatives of value hadoopFile: [K, V, F <: org.apache.hadoop.mapred.InputFormat[K,V]](path: String)(implicit km: scala.reflect.ClassTag[K], implicit vm: scala.reflect.ClassTag[V], implicit fm: scala.reflect.ClassTag[F])org.apache.spark.rdd.RDD[(K, V)] <and> [K, V, F <: org.apache.hadoop.mapred.InputFormat[K,V]](path: String, minPartitions: Int)(implicit km: scala.reflect.ClassTag[K], implicit vm: scala.reflect.ClassTag[V], implicit fm: scala.reflect.ClassTag[F])org.apache.spark.rdd.RDD[(K, V)]
val bundle0 = sc.hadoopFile[HipiImageHeader,FloatImage,HibInputFormat]("hdfs://192.168.199.11:8020/Hdfs/Image/image.hib",1000)
^
Please give me some suggestions for fixing this error. Thanks for your help.
Answer 0 (score: 1)
HibInputFormat extends FileInputFormat[HipiImageHeader, HipiImage], not FileInputFormat[HipiImageHeader, FloatImage], so hadoopFile[HipiImageHeader, HipiImage, HibInputFormat] should work.
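A minimal sketch of that fix, reusing the app name and HDFS path from the question (a sketch only: it assumes the HIPI jars are on the classpath and is untested against a live cluster):

import org.apache.spark.{SparkConf, SparkContext}
import org.hipi.image.{HipiImage, HipiImageHeader}
import org.hipi.imagebundle.mapreduce.HibInputFormat

val conf = new SparkConf().setAppName("BundleTest")
val sc = new SparkContext(conf)

// Value type changed from FloatImage to HipiImage to match what
// HibInputFormat actually declares.
val bundle = sc.hadoopFile[HipiImageHeader, HipiImage, HibInputFormat](
  "hdfs://192.168.199.11:8020/Hdfs/Image/image.hib", 1000)

// If the compiler still rejects this because HibInputFormat is not an
// org.apache.hadoop.mapred.InputFormat (its package is *.mapreduce, i.e.
// the new Hadoop API), Spark's new-API entry point is the counterpart:
// val bundle = sc.newAPIHadoopFile[HipiImageHeader, HipiImage, HibInputFormat](
//   "hdfs://192.168.199.11:8020/Hdfs/Image/image.hib")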