scala.MatchError: creating a DataFrame with more than 22 fields & writing it to an RDBMS

Asked: 2017-06-08 18:10:32

Tags: dataframe apache-kafka apache-spark-sql spark-streaming

First I parse the Kafka messages and then apply a schema to them. The schema prints exactly as expected (I call rdd.toDF().printSchema inside a foreachRDD loop), but when I try to save the data through a JDBC connection I get this error:

scala.MatchError: Enrich.Streaming.Samples$Person@7d7904f6 (of class Enrich.Streaming.Samples$Person)
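
For reference, the schema check mentioned above does work; it looks roughly like this (a minimal sketch, using the same data stream built in the snippet further down):

// Sanity check inside the streaming loop: this prints the expected schema,
// so schema inference over Person succeeds at this point.
data.foreachRDD { rdd =>
  rdd.toDF().printSchema()
}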

class Person (
             name     : String,
             id       : String,
             `type`   : String,  // "type" is a reserved word in Scala, so it has to be back-quoted
             // ... remaining fields elided, 32 constructor parameters in total
            )
extends Product {

@throws(classOf[IndexOutOfBoundsException])
override def productElement(n: Int): Any = n match {
  case 0 => name
  case 1 => id
  case 2 => `type`
  // ... cases 3 through 31 elided, one per field
  case _ => throw new IndexOutOfBoundsException(n.toString())
}

override def productArity: Int = 32

override def canEqual(that: Any): Boolean = that.isInstanceOf[Person]

}

object Person extends Serializable {

import scala.util.{Try, Success, Failure}

def parse(str: String): Option[Person] = {
  val paramArray = str.split("\\|")
  Try(
    new Person(paramArray(0),
      paramArray(1),
      paramArray(2),
      // ... remaining arguments paramArray(3) through paramArray(31) elided

    )) match {
    case Success(trimarc) => Some(trimarc)
    case Failure(throwable) => {
      println(throwable.getMessage())
      None
    }
  }

}

}
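
For what it's worth, calling the parser and handling its result looks like this (a minimal sketch; the sample record is hypothetical):

// Hypothetical sample input -- a real record carries 32 pipe-delimited fields,
// so this truncated one would hit the Failure branch and yield None.
val sample = "John|42|employee|..."
Person.parse(sample) match {
  case Some(p) => println(s"parsed record with arity ${p.productArity}")
  case None    => println("parse failed")
}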




import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.sql.SQLContext

val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, topicsSet)

val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

val data = messages.map(_._2).map(Person.parse)   // DStream[Option[Person]]

data.foreachRDD(rdd =>
  rdd.toDF().write.mode("append").jdbc(url, table, prop)
)
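
The url, table and prop used above are not defined in the snippet; a minimal sketch of what they could look like (all values are placeholders, and the driver choice is an assumption):

import java.util.Properties

// Hypothetical JDBC settings -- none of these values appear in the original post.
val url   = "jdbc:postgresql://localhost:5432/mydb"  // placeholder connection string
val table = "person"                                 // placeholder target table
val prop  = new Properties()
prop.setProperty("user", "spark")                    // placeholder credentials
prop.setProperty("password", "secret")
prop.setProperty("driver", "org.postgresql.Driver")  // assumed PostgreSQL driver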
If anyone can help me figure out what the problem is, I would really appreciate it.

Thanks!

0 Answers:

No answers yet.