Why doesn't the implicit conversion for Writable work?

Asked: 2017-07-03 08:53:12

Tags: scala hadoop apache-spark rdd

SparkContext defines several implicit conversions between Writable types and their primitive counterparts, e.g. LongWritable <-> Long and Text <-> String.
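
For example, these conversions are what make the following compile (a minimal sketch, assuming a SparkContext named sc and a sequence file of LongWritable/Text pairs at a hypothetical path):

    // The implicit WritableConverters let the type parameters be the primitive
    // types; Spark decodes LongWritable/Text under the hood.
    val pairs: org.apache.spark.rdd.RDD[(Long, String)] =
      sc.sequenceFile[Long, String]("hdfs:///tmp/pairs") // hypothetical path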

  • TEST CASE 1:

I use the following code to combine small files:

  @Test
  def testCombineSmallFiles(): Unit = {
    val path = "file:///d:/logs"
    val rdd = sc.newAPIHadoopFile[LongWritable, Text, CombineTextInputFormat](path)
    println(s"rdd partition number is ${rdd.partitions.length}")
    println(s"lines is :${rdd.count()}")
  }

The code above runs fine, but if I use the following line to get the RDD instead, it leads to a compile error:

val rdd = sc.newAPIHadoopFile[Long, String, CombineTextInputFormat](path)

It looks like the implicit conversion doesn't kick in here. I would like to know what is wrong and why it doesn't work.

  • TEST CASE 2:

With the following code that uses sequenceFile, the implicit conversions do seem to work (Text is converted to String and IntWritable to Int):

  @Test
  def testReadWriteSequenceFile(): Unit = {
    val data = List(("A", 1), ("B", 2), ("C", 3))
    val outputDir = Utils.getOutputDir()
    sc.parallelize(data).saveAsSequenceFile(outputDir)
    //implicit conversion works for the SparkContext#sequenceFile method
    val rdd = sc.sequenceFile(outputDir + "/part-00000", classOf[String], classOf[Int])
    rdd.foreach(println)
  }

Comparing these two test cases, I don't see the key difference that makes one work while the other doesn't.

  • NOTE:

The SparkContext#sequenceFile method I use in TEST CASE 2 is:

  def sequenceFile[K, V](
      path: String,
      keyClass: Class[K],
      valueClass: Class[V]): RDD[(K, V)] = withScope {
    assertNotStopped()
    sequenceFile(path, keyClass, valueClass, defaultMinPartitions)
  }

Inside this sequenceFile method, it calls another sequenceFile overload, which in turn calls the hadoopFile method to read the data:

  def sequenceFile[K, V](path: String,
      keyClass: Class[K],
      valueClass: Class[V],
      minPartitions: Int
      ): RDD[(K, V)] = withScope {
    assertNotStopped()
    val inputFormatClass = classOf[SequenceFileInputFormat[K, V]]
    hadoopFile(path, inputFormatClass, keyClass, valueClass, minPartitions)
  }

1 Answer:

Answer 0 (score: 3):

The implicit conversion needs a WritableConverter. For example:

   def sequenceFile[K, V]
       (path: String, minPartitions: Int = defaultMinPartitions)
       (implicit km: ClassTag[K], vm: ClassTag[V],
        kcf: () => WritableConverter[K], vcf: () => WritableConverter[V]): RDD[(K, V)] = {...}
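
For comparison, a minimal sketch (assuming the outputDir from TEST CASE 2) of calling this overload so that the compiler supplies the WritableConverters instead of you passing classOf[...]:

    // With type parameters, the implicit stringWritableConverter and
    // intWritableConverter are resolved automatically and decode Text/IntWritable.
    val converted: org.apache.spark.rdd.RDD[(String, Int)] =
      sc.sequenceFile[String, Int](outputDir + "/part-00000")
    converted.foreach(println)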

I could not find any such WritableConverter anywhere for sc.newAPIHadoopFile (see also https://stackoverflow.com/a/28810123), so it is not possible there.

Also, please confirm that you have used import SparkContext._ (I cannot see the imports in your post).
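
For reference, that import is (a one-line sketch; it is only strictly required before Spark 1.3, as the comment in the WritableConverter code below notes):

    import org.apache.spark.SparkContext._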

Please have a look at the WritableConverter code (from the Spark source/docs):

/**
 * A class encapsulating how to convert some type `T` from `Writable`. It stores both the `Writable`
 * class corresponding to `T` (e.g. `IntWritable` for `Int`) and a function for doing the
 * conversion.
 * The getter for the writable class takes a `ClassTag[T]` in case this is a generic object
 * that doesn't know the type of `T` when it is created. This sounds strange but is necessary to
 * support converting subclasses of `Writable` to themselves (`writableWritableConverter()`).
 */
private[spark] class WritableConverter[T](
    val writableClass: ClassTag[T] => Class[_ <: Writable],
    val convert: Writable => T)
  extends Serializable

object WritableConverter {

  // Helper objects for converting common types to Writable
  private[spark] def simpleWritableConverter[T, W <: Writable: ClassTag](convert: W => T)
  : WritableConverter[T] = {
    val wClass = classTag[W].runtimeClass.asInstanceOf[Class[W]]
    new WritableConverter[T](_ => wClass, x => convert(x.asInstanceOf[W]))
  }

  // The following implicit functions were in SparkContext before 1.3 and users had to
  // `import SparkContext._` to enable them. Now we move them here to make the compiler find
  // them automatically. However, we still keep the old functions in SparkContext for backward
  // compatibility and forward to the following functions directly.

  implicit def intWritableConverter(): WritableConverter[Int] =
    simpleWritableConverter[Int, IntWritable](_.get)

  implicit def longWritableConverter(): WritableConverter[Long] =
    simpleWritableConverter[Long, LongWritable](_.get)

  implicit def doubleWritableConverter(): WritableConverter[Double] =
    simpleWritableConverter[Double, DoubleWritable](_.get)

  implicit def floatWritableConverter(): WritableConverter[Float] =
    simpleWritableConverter[Float, FloatWritable](_.get)

  implicit def booleanWritableConverter(): WritableConverter[Boolean] =
    simpleWritableConverter[Boolean, BooleanWritable](_.get)

  implicit def bytesWritableConverter(): WritableConverter[Array[Byte]] = {
    simpleWritableConverter[Array[Byte], BytesWritable] { bw =>
      // getBytes method returns array which is longer then data to be returned
      Arrays.copyOfRange(bw.getBytes, 0, bw.getLength)
    }
  }

  implicit def stringWritableConverter(): WritableConverter[String] =
    simpleWritableConverter[String, Text](_.toString)

  implicit def writableWritableConverter[T <: Writable](): WritableConverter[T] =
    new WritableConverter[T](_.runtimeClass.asInstanceOf[Class[T]], _.asInstanceOf[T])
}

EDIT:

  "I have updated my question and given two test cases, one works and the other doesn't, but I can't figure out the difference between them."

The implicit conversion needs a WritableConverter.

  • Testcase1: with val rdd = sc.newAPIHadoopFile...(path), the implicit conversion is NOT done inside SparkContext. That is why it does not work if you pass Long; you get a compiler error (see the workaround sketch after this list).

  • TestCase2: with val rdd = sc.sequenceFile(...) you pass classOf[...] directly. If you pass classOf[...], no implicit conversion is needed, since those are classes, not a Long value or a String value.
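
If you do need primitive key/value types from newAPIHadoopFile, one workaround sketch (assuming the same path and types as TEST CASE 1) is to keep the Writable type parameters and convert explicitly:

    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat
    import org.apache.spark.rdd.RDD

    // CombineTextInputFormat produces LongWritable/Text pairs, so read those
    // and map to primitives yourself. Converting inside the map also avoids
    // problems with Hadoop reusing the Writable instances.
    val raw = sc.newAPIHadoopFile[LongWritable, Text, CombineTextInputFormat]("file:///d:/logs")
    val converted: RDD[(Long, String)] = raw.map { case (offset, line) => (offset.get, line.toString) }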