Error decoding Protobuf messages in Spark Streaming with ScalaPB

Asked: 2016-11-17 07:35:37

Tags: scala apache-spark-sql protocol-buffers spark-streaming scalapb

This is a Spark Streaming application that consumes Kafka messages encoded with Protocol Buffers, using the ScalaPB library. I am getting the following error. Please help.

> com.google.protobuf.InvalidProtocolBufferException: While parsing a protocol message, the input ended unexpectedly in the middle of a field. This could mean either that the input has been truncated or that an embedded message misreported its own length.
>   at com.google.protobuf.InvalidProtocolBufferException.truncatedMessage(InvalidProtocolBufferException.java:82)
>   at com.google.protobuf.CodedInputStream.skipRawBytesSlowPath(CodedInputStream.java:1284)
>   at com.google.protobuf.CodedInputStream.skipRawBytes(CodedInputStream.java:1267)
>   at com.google.protobuf.CodedInputStream.skipField(CodedInputStream.java:198)
>   at com.example.protos.demo.Student.mergeFrom(Student.scala:59)
>   at com.example.protos.demo.Student.mergeFrom(Student.scala:11)
>   at com.trueaccord.scalapb.LiteParser$.parseFrom(LiteParser.scala:9)
>   at com.trueaccord.scalapb.GeneratedMessageCompanion$class.parseFrom(GeneratedMessageCompanion.scala:103)
>   at com.example.protos.demo.Student$.parseFrom(Student.scala:88)
>   at com.trueaccord.scalapb.GeneratedMessageCompanion$class.parseFrom(GeneratedMessageCompanion.scala:119)
>   at com.example.protos.demo.Student$.parseFrom(Student.scala:88)
>   at StudentConsumer$.StudentConsumer$$parseLine$1(StudentConsumer.scala:24)
>   at StudentConsumer$$anonfun$1.apply(StudentConsumer.scala:30)
>   at StudentConsumer$$anonfun$1.apply(StudentConsumer.scala:30)
>   at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
>   at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>   at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:370)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:246)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$4.apply(SparkPlan.scala:240)
>   at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
>   at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:803)
>   at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
>   at org.apache.spark.scheduler.Task.run(Task.scala:86)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)

Here is my code...

object StudentConsumer {
  import com.trueaccord.scalapb.spark._
  import org.apache.spark.sql.SparkSession
  import com.example.protos.demo._

  def main(args: Array[String]) {

    val spark = SparkSession.builder
      .master("local")
      .appName("spark session example")
      .getOrCreate()

    import spark.implicits._

    def parseLine(s: String): Student =
      Student.parseFrom(
        org.apache.commons.codec.binary.Base64.decodeBase64(s))

    val ds1 = spark.readStream.format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "student")
      .load()

    val ds2 = ds1.selectExpr("CAST(value AS STRING)").as[String].map(parseLine)

    val query = ds2.writeStream
      .outputMode("append")
      .format("console")
      .start()

    query.awaitTermination()
  }
}

2 Answers:

Answer 0 (score: 2)

Judging from the error, the messages you are trying to parse appear to be truncated or corrupted. Is the sender base64-encoding the protobufs before sending them to Kafka?

If so, it is worth adding a println(s) to parseLine to check whether what you receive is what you expect (the CAST(value AS String) may be having unintended consequences on your input).
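To see why that String cast is dangerous, here is a small standalone sketch (not from the answer; the bytes are purely illustrative) showing that arbitrary binary data does not survive a round trip through a UTF-8 String — the invalid byte is replaced, so the bytes handed to the protobuf parser no longer match what the producer sent:

```scala
import java.nio.charset.StandardCharsets.UTF_8

// Illustrative bytes only: 0x96 is a lone UTF-8 continuation byte, so decoding
// it as a String substitutes U+FFFD, and re-encoding produces different bytes.
val raw: Array[Byte] = Array(0x08, 0x96, 0x01).map(_.toByte)
val roundTripped: Array[Byte] = new String(raw, UTF_8).getBytes(UTF_8)
println(raw.sameElements(roundTripped)) // prints: false
```

This is exactly what happens to a binary Kafka payload that is cast to a string and later decoded back to bytes.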

Finally, the following Kafka / Spark Streaming / ScalaPB example may be helpful; it assumes the messages are sent to Kafka as raw bytes:

https://github.com/thesamet/sbtb2016-votes/blob/master/spark/src/main/scala/votes/Aggregator.scala
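The "input has been truncated" message is also consistent with a lenient base64 decoder quietly dropping input. A hypothetical illustration using java.util.Base64's MIME decoder (which, like commons-codec's decodeBase64 used in the question, ignores characters outside the base64 alphabet rather than failing):

```scala
import java.util.Base64

// Bytes that were never base64-encoded mostly fall outside the base64 alphabet;
// a lenient decoder silently skips them, shrinking the payload -- which the
// protobuf parser then reports as a truncated message.
val notBase64 = "\u0008\u0096\u0001" // raw protobuf-style bytes misread as text
val decoded: Array[Byte] = Base64.getMimeDecoder.decode(notBase64)
println(decoded.length) // prints: 0 -- all three input characters were dropped
```

So base64-decoding a payload that was actually sent as raw bytes does not throw; it just hands the parser a mangled, shorter buffer.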

Answer 1 (score: 2)

Thanks for the feedback, @thesamet.

The following code works...

  def main(args: Array[String]) {

    val spark = SparkSession.builder
      .master("local")
      .appName("spark session example")
      .getOrCreate()

    import spark.implicits._

    val ds1 = spark.readStream.format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "student")
      .load()

    val ds2 = ds1.map(row => row.getAs[Array[Byte]]("value")).map(Student.parseFrom(_))

    val query = ds2.writeStream
      .outputMode("append")
      .format("console")
      .start()

    query.awaitTermination()
  }