Spark is used to fetch a table's schema from a SQL Server database. Because of data type mismatches, creating a Hive table from this schema fails. How can we convert SQL Server data types to Hive data types in Spark Scala?
// Fetch only the table schema (a StructType) over JDBC; no rows are read.
val df = sqlContext.read.format("jdbc")
  .option("url", "jdbc:sqlserver://host:port;databaseName=DB")
  .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
  .option("dbtable", "schema.tableName")
  .option("user", "Userid").option("password", "pswd")
  .load().schema
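For reference, the value bound to df above is the StructType returned by load().schema, so it can be inspected directly. A minimal sketch that prints each column name with the type name Spark inferred:

// Print each field of the fetched schema: column name and Spark SQL type name.
df.fields.foreach { field =>
  println(s"${field.name}: ${field.dataType.typeName}")
}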
Answer 0 (score: 1)
Thanks, I found a solution. I created a method that maps each source data type to a Hive data type, as shown below.
// Map a source data type name to its Hive equivalent.
// Any unrecognized type falls back to string.
def sqlToHiveDatatypeMapping(inputDatatype: String): String = inputDatatype match {
  case "numeric" => "int"
  case "bit" => "smallint"
  case "long" => "bigint"
  case "dec_float" | "money" | "smallmoney" | "real" => "double"
  case "char" | "nchar" | "varchar" | "nvarchar" | "text" | "ntext" | "clob" => "string"
  case "binary" | "varbinary" | "image" | "blob" => "binary"
  case "date" => "date"
  case "datetime" | "datetime2" | "smalldatetime" | "datetimeoffset" | "timestamp" | "time" => "timestamp"
  case _ => "string"
}
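A few sample calls show how the mapping behaves (illustrative inputs; any type not listed in the match falls through to the default):

println(sqlToHiveDatatypeMapping("nvarchar"))         // string
println(sqlToHiveDatatypeMapping("datetime2"))        // timestamp
println(sqlToHiveDatatypeMapping("uniqueidentifier")) // string (default case)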
// Build the Hive column definition list, e.g. "id int,name string".
val columns = df.fields
  .map(field => s"${field.name.toLowerCase} ${sqlToHiveDatatypeMapping(field.dataType.typeName.toLowerCase)}")
  .mkString(",")
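The columns string can then be dropped into a CREATE TABLE statement. A minimal sketch, assuming a HiveContext (or Hive support enabled); hive_db.tableName and the ORC storage format are placeholders, not part of the original answer:

// Create the Hive table from the mapped column definitions.
// Adjust the database, table name, and storage clause to your setup.
val createStmt = s"CREATE TABLE IF NOT EXISTS hive_db.tableName ($columns) STORED AS ORC"
sqlContext.sql(createStmt)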