I have a simple test case for learning how to use the Table API and CASE WHEN, as follows:
    import org.apache.flink.api.scala.ExecutionEnvironment
    import org.apache.flink.api.scala._
    import org.apache.flink.table.api.TableEnvironment

    case class Person(name: String, age: Int)

    object TableTest {
      def main(args: Array[String]): Unit = {
        val env = ExecutionEnvironment.getExecutionEnvironment
        val te = TableEnvironment.getTableEnvironment(env)
        val ds = env.fromCollection(Seq(Person("a", 20), Person("b", 40), Person("c", 60)))
        te.registerDataSet("person", ds)
        val table = te.sqlQuery(
          """
            select name, age,
              case
                when age <= 20 then 'A'
                when age <= 40 then 'B'
                when age <= 60 then 'C'
                else 'D'
              end as age_level
            from person
          """.stripMargin(' '))
        te.toDataSet[Person](table).print()
      }
    }
When I run it, I get the following exception. age_level is a computed column, and I don't understand why this error occurs:
    Exception in thread "main" org.apache.flink.table.api.TableException: Arity [3] of result [ArrayBuffer(String, Integer, String)] does not match the number[2] of requested type [com.flink.table.Person(name: String, age: Integer)].
at org.apache.flink.table.api.TableEnvironment.generateRowConverterFunction(TableEnvironment.scala:1165)
at org.apache.flink.table.api.BatchTableEnvironment.getConversionMapper(BatchTableEnvironment.scala:339)
at org.apache.flink.table.api.BatchTableEnvironment.translate(BatchTableEnvironment.scala:504)
at org.apache.flink.table.api.BatchTableEnvironment.translate(BatchTableEnvironment.scala:476)
at org.apache.flink.table.api.scala.BatchTableEnvironment.toDataSet(BatchTableEnvironment.scala:141)
at com.flink.table.TableTest$.main(TableTest.scala:37)
at com.flink.table.TableTest.main(TableTest.scala)
Answer (score: 2)
The problem is that the table you are converting to DataSet[Person] has three attributes, (name, age, age_level), while the Person case class has only two fields, (name, age).

You can define a new case class

    case class PersonWithAgeLevel(name: String, age: Int, age_level: String)

and convert the table to DataSet[PersonWithAgeLevel] instead.
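To see why the three-field case class is needed, here is a minimal self-contained sketch (no Flink runtime required) that mirrors the query's CASE WHEN expression in plain Scala; the names AgeLevelDemo and ageLevel are illustrative, not part of the Flink API:

```scala
// A minimal sketch, assuming the same age thresholds as the SQL query.
object AgeLevelDemo {
  case class Person(name: String, age: Int)
  // Three fields, matching the three result columns (name, age, age_level).
  case class PersonWithAgeLevel(name: String, age: Int, age_level: String)

  // Mirrors the SQL CASE WHEN expression from the question.
  def ageLevel(age: Int): String =
    if (age <= 20) "A"
    else if (age <= 40) "B"
    else if (age <= 60) "C"
    else "D"

  def main(args: Array[String]): Unit = {
    val people = Seq(Person("a", 20), Person("b", 40), Person("c", 60))
    // Each row of the query result carries three values, so the target
    // case class must have arity 3 as well.
    people.map(p => PersonWithAgeLevel(p.name, p.age, ageLevel(p.age))).foreach(println)
  }
}
```

The exception's message makes the same point: the result arity [3] must match the arity of the requested type, so converting to DataSet[PersonWithAgeLevel] satisfies the check.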