Writing to an Oracle database with Apache Spark 1.4.0

Time: 2015-07-08 08:22:45

Tags: oracle scala jdbc apache-spark

I am trying to write some data to our Oracle database using the Spark 1.4.0 DataFrame.write.jdbc() function.

The symmetric read.jdbc() function, which reads data from the Oracle database into a DataFrame object, works well. However, writing the DataFrame back (I also tried writing the exact same object I got from the database, with overwrite set to true) throws the following exception:

    Exception in thread "main" java.sql.SQLSyntaxErrorException: ORA-00902: Ungültiger Datentyp

    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:450)
    at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:399)
    at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:1017)
    at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:655)
    at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:249)
    at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:566)
    at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:215)
    at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:58)
    at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:943)
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1075)
    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3820)
    at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3897)
    at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1361)
    at org.apache.spark.sql.DataFrameWriter.jdbc(DataFrameWriter.scala:252)
    at main3$.main(main3.scala:72)
    at main3.main(main3.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)

The table has 2 basic string columns. When they are Integer columns, it can write them just fine.

Actually, when I dug deeper, I realized that it maps StringType to "TEXT", which Oracle does not recognize (it should be "VARCHAR" instead). The code comes from jdbc.scala, which can be found on GitHub:

    def schemaString(df: DataFrame, url: String): String = {
      val sb = new StringBuilder()
      val dialect = JdbcDialects.get(url)
      df.schema.fields foreach { field => {
        val name = field.name
        val typ: String =
          dialect.getJDBCType(field.dataType).map(_.databaseTypeDefinition).getOrElse(
            field.dataType match {
              case IntegerType => "INTEGER"
              case LongType => "BIGINT"
              case DoubleType => "DOUBLE PRECISION"
              case FloatType => "REAL"
              case ShortType => "INTEGER"
              case ByteType => "BYTE"
              case BooleanType => "BIT(1)"
              case StringType => "TEXT"
              case BinaryType => "BLOB"
              case TimestampType => "TIMESTAMP"
              case DateType => "DATE"
              case DecimalType.Unlimited => "DECIMAL(40,20)"
              case _ => throw new IllegalArgumentException(s"Don't know how to save $field to JDBC")
            })
        val nullable = if (field.nullable) "" else "NOT NULL"
        sb.append(s", $name $typ $nullable")
      }}
      if (sb.length < 2) "" else sb.substring(2)
    }
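So, for my table with two nullable string columns, my reading of the code above is that Spark builds DDL roughly like this (my own reconstruction, not captured output):

    // Illustration only: reconstructing the statement Spark 1.4 builds.
    val schemaSql = "ONE TEXT , TWO TEXT"            // approximate schemaString result for my frame below
    val ddl = s"CREATE TABLE STUDENTS ($schemaSql)"  // roughly what gets executed on Oracle
    // Oracle has no TEXT datatype, hence ORA-00902 ("Ungültiger Datentyp" = invalid datatype).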

So the question is: am I doing something wrong somewhere, or does Spark SQL simply not support Oracle? Should I install some plugin to use Spark SQL with Oracle?

My simple main is:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.{Row, SQLContext}
    import org.apache.spark.sql.types.{StringType, StructField, StructType}

    val conf = new SparkConf().setAppName("Parser").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    // Connection details (placeholders - substitute your own URL and credentials)
    val url = "jdbc:oracle:thin:@//host:1521/dbname"
    val connectionprop = new java.util.Properties()
    connectionprop.setProperty("user", "username")
    connectionprop.setProperty("password", "password")

    // Reading the table works fine
    val reader = sqlContext.read
    val frame = reader.jdbc(url, "STUDENTS", connectionprop)

    frame.printSchema()
    frame.show()

    // Build a one-row DataFrame with two string columns
    val row = Row("3", "4")

    val struct =
      StructType(
        StructField("ONE", StringType, true) ::
          StructField("TWO", StringType, true) :: Nil)

    val arr = Array(row)
    val rddRow = sc.parallelize(arr)
    val dframe = sqlContext.createDataFrame(rddRow, struct)
    dframe.printSchema()
    dframe.show()

    // This is the call that throws ORA-00902
    dframe.write.jdbc(url, "STUDENTS", connectionprop)

3 Answers:

Answer 0 (score: 10):

The actual answer - there is no way to write back to Oracle with the existing DataFrame.write.jdbc() implementation in 1.4.0, but if you don't mind upgrading to Spark 1.5 there is a slightly hackish way to do it. As described here, there are two problems:

The simple one - the query Spark uses to check whether the table already exists is not compatible with Oracle (Oracle has no LIMIT clause; it would need something like WHERE ROWNUM = 1 instead):

    SELECT 1 FROM $table LIMIT 1

This can easily be avoided by calling the save-table utility method directly:

    org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils.saveTable(df, url, table, props)

And the hard one (as you guessed) - there is no Oracle-specific data type dialect out of the box. Adopting the solution from the same article:

    import org.apache.spark.sql.jdbc.{JdbcDialects, JdbcType, JdbcDialect}
    import org.apache.spark.sql.types._

    val OracleDialect = new JdbcDialect {
      override def canHandle(url: String): Boolean = url.startsWith("jdbc:oracle") || url.contains("oracle")

      override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
        case StringType => Some(JdbcType("VARCHAR2(255)", java.sql.Types.VARCHAR))
        case BooleanType => Some(JdbcType("NUMBER(1)", java.sql.Types.NUMERIC))
        case IntegerType => Some(JdbcType("NUMBER(10)", java.sql.Types.NUMERIC))
        case LongType => Some(JdbcType("NUMBER(19)", java.sql.Types.NUMERIC))
        case DoubleType => Some(JdbcType("NUMBER(19,4)", java.sql.Types.NUMERIC))
        case FloatType => Some(JdbcType("NUMBER(19,4)", java.sql.Types.NUMERIC))
        case ShortType => Some(JdbcType("NUMBER(5)", java.sql.Types.NUMERIC))
        case ByteType => Some(JdbcType("NUMBER(3)", java.sql.Types.NUMERIC))
        case BinaryType => Some(JdbcType("BLOB", java.sql.Types.BLOB))
        case TimestampType => Some(JdbcType("DATE", java.sql.Types.DATE))
        case DateType => Some(JdbcType("DATE", java.sql.Types.DATE))
        // case DecimalType.Fixed(precision, scale) => Some(JdbcType("NUMBER(" + precision + "," + scale + ")", java.sql.Types.NUMERIC))
        case DecimalType.Unlimited => Some(JdbcType("NUMBER(38,4)", java.sql.Types.NUMERIC))
        case _ => None
      }
    }

    JdbcDialects.registerDialect(OracleDialect)

So, finally, a working example should look similar to:

    val url: String = "jdbc:oracle:thin:@your_domain:1521/dbname"
    val driver: String = "oracle.jdbc.OracleDriver"
    val props = new java.util.Properties()
    props.setProperty("user", "username")
    props.setProperty("password", "userpassword")
    org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils.saveTable(dataFrame, url, "table_name", props)
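Note that the JdbcDialects.registerDialect(OracleDialect) call from the snippet above has to run before saveTable, since Spark looks the dialect up from the URL at write time.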

Answer 1 (score: 0):

Update: as of Spark 2.x

There is a problem: when creating a JDBC table, Spark double-quotes every columnName, so all the Oracle table columnNames become case-sensitive when you try to query them via sqlPlus.

    select colA from myTable; => doesn't work anymore
    select "colA" from myTable; => works

[Resolved] Dataframe to Oracle creates table with case sensitive column
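For completeness, here is a minimal, untested sketch of one possible workaround, assuming a Spark 2.x version where JdbcDialect exposes quoteIdentifier (the dialect name and URL check are my own illustration): register a dialect that leaves identifiers unquoted, so Oracle folds the column names to upper case as usual.

    import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}

    // Sketch only: skip the double quotes so Oracle treats column names
    // case-insensitively. Safe only when the names need no quoting
    // (no spaces, no reserved words, no deliberately lower-case names).
    val UnquotedOracleDialect = new JdbcDialect {
      override def canHandle(url: String): Boolean = url.startsWith("jdbc:oracle")
      override def quoteIdentifier(colName: String): String = colName
    }
    JdbcDialects.registerDialect(UnquotedOracleDialect)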

Answer 2 (score: -1):

You can use org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils.saveTable, as Aerondir said.