Pure Java/Scala code for writing TensorFlow TFRecords data files

Asked: 2016-01-10 21:58:55

Tags: java scala apache-spark guava tensorflow

I am trying to write a pure Java/Scala implementation of TensorFlow's RecordWriter class, so that I can convert a Spark DataFrame into a TFRecords file. According to the documentation, each record in a TFRecords file has the following format:

uint64 length
uint32 masked_crc32_of_length
byte   data[length]
uint32 masked_crc32_of_data

so each record occupies 8 + 4 + length + 4 bytes on disk. Both CRCs are masked as follows:

masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul

Currently I compute the CRC with the following code, which uses Guava's CRC32C implementation:

import com.google.common.hash.Hashing

object CRC32 {
  val kMaskDelta = 0xa282ead8

  // CRC32C (Castagnoli) of the input bytes, computed with Guava.
  def hash(in: Array[Byte]): Int = {
    val hashing = Hashing.crc32c()
    hashing.hashBytes(in).asInt()
  }

  // The masking step from the TFRecord format description above.
  def mask(crc: Int): Int = {
    ((crc >> 15) | (crc << 17)) + kMaskDelta
  }
}
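
For reference, a minimal usage sketch of this object (the payload bytes are illustrative):

val payload = "hello".getBytes("UTF-8")
// Masked CRC32C over the payload, as required by the TFRecord framing.
val maskedCrc = CRC32.mask(CRC32.hash(payload))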

The rest of my code follows.

The little-endian encoding of the length and CRC fields is handled by this code:

import java.io.ByteArrayOutputStream
import com.google.common.io.LittleEndianDataOutputStream

object LittleEndianEncoding {
  // Encode a Long as 8 little-endian bytes.
  def encodeLong(in: Long): Array[Byte] = {
    val baos = new ByteArrayOutputStream()
    val out = new LittleEndianDataOutputStream(baos)
    out.writeLong(in)
    baos.toByteArray
  }

  // Encode an Int as 4 little-endian bytes.
  def encodeInt(in: Int): Array[Byte] = {
    val baos = new ByteArrayOutputStream()
    val out = new LittleEndianDataOutputStream(baos)
    out.writeInt(in)
    baos.toByteArray
  }
}
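
As an aside, the same little-endian encoding can also be done without Guava, using only java.nio from the standard library; a minimal sketch (the NioLittleEndian name is mine):

import java.nio.{ByteBuffer, ByteOrder}

object NioLittleEndian {
  // 8 little-endian bytes for the record length.
  def encodeLong(in: Long): Array[Byte] =
    ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN).putLong(in).array()

  // 4 little-endian bytes for the masked CRCs.
  def encodeInt(in: Int): Array[Byte] =
    ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN).putInt(in).array()
}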

The records themselves are built with Protocol Buffers:

import com.google.protobuf.ByteString
import org.tensorflow.example._

import collection.JavaConversions._
import collection.mutable._

object TFRecord {

  def int64Feature(in: Long): Feature = {
    val valueBuilder = Int64List.newBuilder()
    valueBuilder.addValue(in)

    Feature.newBuilder()
      .setInt64List(valueBuilder.build())
      .build()
  }

  def floatFeature(in: Float): Feature = {
    val valueBuilder = FloatList.newBuilder()
    valueBuilder.addValue(in)
    Feature.newBuilder()
      .setFloatList(valueBuilder.build())
      .build()
  }

  def floatVectorFeature(in: Array[Float]): Feature = {
    val valueBuilder = FloatList.newBuilder()
    in.foreach(valueBuilder.addValue)

    Feature.newBuilder()
      .setFloatList(valueBuilder.build())
      .build()
  }

  def bytesFeature(in: Array[Byte]): Feature = {
    val valueBuilder = BytesList.newBuilder()
    valueBuilder.addValue(ByteString.copyFrom(in))
    Feature.newBuilder()
      .setBytesList(valueBuilder.build())
      .build()
  }

  // JavaConversions implicitly converts this Scala HashMap to the
  // java.util.Map[String, Feature] that putAllFeature expects.
  def makeFeatures(features: HashMap[String, Feature]): Features = {
    Features.newBuilder().putAllFeature(features).build()
  }

  def makeExample(features: Features): Example = {
    Example.newBuilder().setFeatures(features).build()
  }

}

Here is an example of how I put everything together to generate a TFRecords file:

import java.io.{File, FileOutputStream}

val label = TFRecord.int64Feature(1)
val feature = TFRecord.floatVectorFeature(Array[Float](1, 2, 3, 4))
val features = TFRecord.makeFeatures(HashMap[String, Feature]("feature" -> feature, "label" -> label))
val ex = TFRecord.makeExample(features)
val exSerialized = ex.toByteArray()
val length = LittleEndianEncoding.encodeLong(exSerialized.length)
val crcLength =  LittleEndianEncoding.encodeInt(CRC32.mask(CRC32.hash(length)))
val crcEx = LittleEndianEncoding.encodeInt(CRC32.mask(CRC32.hash(exSerialized)))

val out = new FileOutputStream(new File("test.tfrecords"))
out.write(length)
out.write(crcLength)
out.write(exSerialized)
out.write(crcEx)
out.close()
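
For readability, the framing logic can also be factored into a small helper so that every record goes through a single code path; a minimal sketch building on the objects above (the writeRecord name is mine, not part of the original code):

import java.io.OutputStream

// Writes one serialized Example using the TFRecord framing:
// length, masked CRC of length, data, masked CRC of data.
def writeRecord(out: OutputStream, data: Array[Byte]): Unit = {
  val length = LittleEndianEncoding.encodeLong(data.length)
  out.write(length)
  out.write(LittleEndianEncoding.encodeInt(CRC32.mask(CRC32.hash(length))))
  out.write(data)
  out.write(LittleEndianEncoding.encodeInt(CRC32.mask(CRC32.hash(data))))
}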

When I try to read the file back inside TensorFlow with a TFRecordReader, I get the following error:

W tensorflow/core/common_runtime/executor.cc:1076] 0x24cc430 Compute status: Data loss: corrupted record at 0

I suspect that either the CRC mask computation is incorrect, or that the byte order of the file generated from Java does not match what the C++ side expects.

2 Answers:

Answer 0 (score: 6):

The problem with my implementation was the computation of the CRC mask. Here is the fix I found:

def mask(crc: Int): Int = {
  ((crc >>> 15) | (crc << 17)) + kMaskDelta
}

The key is to use the unsigned (logical) right-shift operator >>> instead of >>. In Java and Scala, >> is an arithmetic shift that sign-extends, whereas TensorFlow's C++ code performs the shift on an unsigned 32-bit integer.
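
A quick way to see the difference on a value with the high bit set (the value is illustrative):

val crc = 0x80000001                // negative when interpreted as a signed Int
println((crc >> 15).toHexString)    // ffff0000 -- arithmetic shift drags in sign bits
println((crc >>> 15).toHexString)   // 10000    -- logical shift zero-fills, matching C++ uint32

Equivalently, ((crc >>> 15) | (crc << 17)) is a 32-bit right-rotation by 15 bits, i.e. Integer.rotateRight(crc, 15).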

Answer 1 (score: 3):

FWIW, the TensorFlow team provides utility code for reading and writing TFRecords, which can be found in the ecosystem repo.