How to convert an Array[String] to the correct schema?

Asked: 2016-03-25 10:25:46

Tags: scala apache-spark apache-spark-sql

I'm running into a strange problem when trying to convert the fields of an RDD[Array[String]] to the correct types specified in a schema, so I can turn it into a Spark SQL DataFrame.

I have an RDD[Array[String]] and a StructType named schema that specifies the type of each field. What I've done so far is:

sqlContext.createDataFrame(
    inputLines.map { rowValues =>
        RowFactory.create(rowValues.zip(schema.toSeq)
            .map { case (value, struct) =>
                struct.dataType match {
                    case BinaryType  => value.toCharArray().map(ch => ch.toByte)
                    case ByteType    => value.toByte
                    case BooleanType => value.toBoolean
                    case DoubleType  => value.toDouble
                    case FloatType   => value.toFloat
                    case ShortType   => value.toShort
                    case DateType    => value
                    case IntegerType => value.toInt
                    case LongType    => value.toLong
                    case _           => value
                }
            })
    }, schema)

But I get this exception:

java.lang.RuntimeException: Failed to convert value [Ljava.lang.Object;@6e9ffad1 (class of class [Ljava.lang.Object;}) with the type of IntegerType to JSON

when the toJSON method is called...

Do you have any idea why this happens, and how I can fix it?

As mentioned above, here is an example:

val schema = StructType(Seq(StructField("id", IntegerType), StructField("val", StringType)))
val inputLines = sc.parallelize(
      Array(Array("1", "This is a line for testing"),
            Array("2", "The second line")))

1 Answer:

Answer 0 (score: 3)

You are passing the Array as the single argument to RowFactory.create, so the Row ends up with one column that holds the entire array, and Spark then fails to serialize that object as the IntegerType declared in the schema.

If you look at its method signature:

public static Row create(Object ... values) 

it expects a varargs list.

So you just need to expand the array into a varargs list using the : _* syntax.
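For illustration, here is a minimal sketch of the difference (Row.apply takes Any*, so an un-expanded array becomes a single column):

import org.apache.spark.sql.Row

val arr: Array[Any] = Array(1, "a")
Row(arr).length      // 1 -- one column holding the entire array
Row(arr: _*).length  // 2 -- one column per array element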

sqlContext.createDataFrame(inputLines.map { rowValues =>
    Row(              // RowFactory.create is the Java API; use Row.apply instead
        rowValues.zip(schema.toSeq)
            .map { case (value, struct) => struct.dataType match {
                case BinaryType  => value.toCharArray().map(ch => ch.toByte)
                case ByteType    => value.toByte
                case BooleanType => value.toBoolean
                case DoubleType  => value.toDouble
                case FloatType   => value.toFloat
                case ShortType   => value.toShort
                case DateType    => value
                case IntegerType => value.toInt
                case LongType    => value.toLong
                case _           => value
            }
        } : _*        // <-- expand the array into varargs here
    )
}, schema)

In the code above, I replaced RowFactory.create with Row.apply and passed the arguments as varargs.

Alternatively, you can use the Row.fromSeq method.
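For example, a minimal sketch of the equivalence:

import org.apache.spark.sql.Row

// Row.fromSeq builds a Row directly from a Seq, no varargs expansion needed
Row.fromSeq(Seq(1, "a"))  // same result as Row(1, "a")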

Refactoring a bit:

// Convert a raw string to the Scala value matching the field's declared type;
// DateType and any unmatched type fall through as the raw string.
def convertTypes(value: String, struct: StructField): Any = struct.dataType match {
  case BinaryType  => value.toCharArray().map(ch => ch.toByte)
  case ByteType    => value.toByte
  case BooleanType => value.toBoolean
  case DoubleType  => value.toDouble
  case FloatType   => value.toFloat
  case ShortType   => value.toShort
  case DateType    => value
  case IntegerType => value.toInt
  case LongType    => value.toLong
  case _           => value
}

val schema = StructType(Seq(StructField("id",IntegerType),
                            StructField("val",StringType)))

val inputLines = sc.parallelize(Array(Array("1","This is a line for testing"), 
                                      Array("2","The second line")))

val rowRdd = inputLines.map{ array => 
  Row.fromSeq(array.zip(schema.toSeq)
                   .map{ case (value, struct) => 
                           convertTypes(value, struct) })
}

val df = sqlContext.createDataFrame(rowRdd, schema)

df.toJSON.collect 
// Array({"id":1,"val":"This is a line for testing"},
//       {"id":2,"val":"The second line"})