"java.lang.IllegalArgumentException: requirement failed: Overflowed precision" error when fetching Oracle data in Python with PySpark and the JDBC driver

Date: 2015-12-03 13:19:48

Tags: oracle apache-spark pyspark ojdbc

I am trying to connect to an Oracle database from Spark, using the PySpark tool. Spark 1.5, Scala 2.10.4, Python 3.4, ojdbc7.jar. I have not installed the Oracle client; I only copied the Oracle libraries and set LD_LIBRARY_PATH. I tested the setup on the OS (CentOS 7) and it works fine: I can fetch data from R (using the ROracle package) and from Python 3.4 (using cx_Oracle). In PySpark I used the following connection:

df = sqlContext.read.format('jdbc').options(
    url='jdbc:oracle:thin:UserName/Password@IP:1521/SID',
    dbtable="Table"
).load()
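For reference, a minimal variant of the same call that names the JDBC driver class explicitly; the driver option is part of Spark's JDBC data source, and oracle.jdbc.OracleDriver is the standard class in ojdbc7.jar (the jar itself still has to be on the driver classpath, e.g. via --driver-class-path):

df = sqlContext.read.format('jdbc').options(
    url='jdbc:oracle:thin:UserName/Password@IP:1521/SID',
    dbtable="Table",
    driver='oracle.jdbc.OracleDriver'  # register the Oracle JDBC driver explicitly
).load()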

It connects without a problem, but when I try df.head(), for example, I get this error:

15/12/03 16:41:52 INFO SparkContext: Starting job: showString at NativeMethodAccessorImpl.java:-2
15/12/03 16:41:52 INFO DAGScheduler: Got job 2 (showString at NativeMethodAccessorImpl.java:-2) with 1 output partitions
15/12/03 16:41:52 INFO DAGScheduler: Final stage: ResultStage 2(showString at NativeMethodAccessorImpl.java:-2)
15/12/03 16:41:52 INFO DAGScheduler: Parents of final stage: List()
15/12/03 16:41:52 INFO DAGScheduler: Missing parents: List()
15/12/03 16:41:52 INFO DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[5] at showString at NativeMethodAccessorImpl.java:-2), which has no missing parents
15/12/03 16:41:52 INFO MemoryStore: ensureFreeSpace(5872) called with curMem=17325, maxMem=13335873454
15/12/03 16:41:52 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 5.7 KB, free 12.4 GB)
15/12/03 16:41:52 INFO MemoryStore: ensureFreeSpace(2789) called with curMem=23197, maxMem=13335873454
15/12/03 16:41:52 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 2.7 KB, free 12.4 GB)
15/12/03 16:41:52 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:41646 (size: 2.7 KB, free: 12.4 GB)
15/12/03 16:41:52 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:861
15/12/03 16:41:52 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[5] at showString at NativeMethodAccessorImpl.java:-2)
15/12/03 16:41:52 INFO TaskSchedulerImpl: Adding task set 2.0 with 1 tasks
15/12/03 16:41:52 INFO TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2, localhost, PROCESS_LOCAL, 1929 bytes)
15/12/03 16:41:52 INFO Executor: Running task 0.0 in stage 2.0 (TID 2)
15/12/03 16:41:52 INFO JDBCRDD: closed connection
15/12/03 16:41:52 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 2)
java.lang.IllegalArgumentException: requirement failed: Overflowed precision
...

When I searched, I found this is a bug that was fixed on GitHub, supposedly by the line below:

case java.sql.Types.NUMERIC       => DecimalType.bounded(precision + scale, scale)

But I checked, and that line is already present in my JDBCRDD.scala file.
Is there any way to work around this problem?

1 answer:

Answer 0 (score: 1)

I spoke with a Spark developer, and he said this is a bug; we should wait for the next release or use the Spark build from the JIRA ticket.
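In the meantime, one commonly reported workaround for this Oracle NUMBER overflow (not something the developer suggested, so treat it as an untested sketch with placeholder column names) is to pass a subquery as dbtable that CASTs the offending column to a precision and scale that Spark's DecimalType can hold (at most 38 digits):

# Hypothetical workaround: COL1/COL2 are placeholder column names.
# The CAST bounds the precision/scale reported in the JDBC metadata so that
# Spark's DecimalType (max precision 38) no longer overflows on read.
query = "(SELECT CAST(COL1 AS NUMBER(38, 10)) AS COL1, COL2 FROM Table) t"
df = sqlContext.read.format('jdbc').options(
    url='jdbc:oracle:thin:UserName/Password@IP:1521/SID',
    dbtable=query
).load()
df.head()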