java.io.IOException: No FileSystem for scheme: D when connecting to Oracle DB from Windows

Date: 2019-08-01 04:56:51

Tags: apache-spark

I am using Apache Spark on a Windows machine and have connected to an Oracle DB with the ojdbc6.jar driver. When I print the schema with df.schema I get a result, but when I try to display the data with df.show I get this error:

ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.io.IOException: No FileSystem for scheme: D
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)

Below is the code I used.

D:\Spark\spark-2.4.3-bin-hadoop2.7\bin>spark-shell --driver-class-path ojdbc6.jar --jars ojdbc6.jar --packages com.databricks:spark-csv_2.10:1.4.0

scala> sc.addJar("D:/Spark/spark-2.4.3-bin-hadoop2.7/bin/ojdbc6.jar")

scala> val sqlContext=new org.apache.spark.sql.SQLContext(sc)

warning: there was one deprecation warning; re-run with -deprecation for details

sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@42fc744

scala> val df = sqlContext.load("jdbc",Map("url" -> "jdbc:oracle:thin:vision/vision123@10.16.1.101:1521:vision", "dbtable" -> "vision_business_day"))

warning: there was one deprecation warning; re-run with -deprecation for details
df: org.apache.spark.sql.DataFrame = [COUNTRY: string, LE_BOOK: string ... 1 more field]
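As an aside, the deprecation warning above is because SQLContext.load was superseded in Spark 2.x. A hedged sketch of the equivalent call with the current DataFrameReader API (assuming the `spark` SparkSession that spark-shell provides, and the same connection details as above):

```scala
// Sketch only: same JDBC source expressed via the non-deprecated
// DataFrameReader API. `spark` is the SparkSession bound by spark-shell.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:vision/vision123@10.16.1.101:1521:vision")
  .option("dbtable", "vision_business_day")
  .load()
```

This changes only the API used to build the DataFrame; it does not by itself fix the FileSystem error below.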

scala> df.show
[Stage 0:>                                                          (0 + 1) / 1]19/08/02 11:44:46 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
        java.io.IOException: No FileSystem for scheme: D
                at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
                at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
                at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
                at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
                at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
                at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
                at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1866)
                at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:721)
                at org.apache.spark.util.Utils$.fetchFile(Utils.scala:509)
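The stack trace shows the failure happening in Utils.fetchFile, i.e. while the executor is fetching a dependency, not while talking to Oracle. A likely culprit is the bare Windows path passed to sc.addJar: when that string is parsed as a URI, the drive letter "D" becomes the URI scheme, and Hadoop then looks for a FileSystem registered under the scheme "D". The self-contained sketch below illustrates the parsing behavior (assumption: plain java.net.URI parsing mirrors what the jar fetcher sees):

```scala
// The Windows drive letter is parsed as the URI scheme, which is why
// Hadoop reports "No FileSystem for scheme: D".
val bareUri = new java.net.URI("D:/Spark/spark-2.4.3-bin-hadoop2.7/bin/ojdbc6.jar")
println(bareUri.getScheme)   // prints "D"

// An explicit file:/// URI carries the scheme Hadoop expects for local files.
val fileUri = new java.net.URI("file:///D:/Spark/spark-2.4.3-bin-hadoop2.7/bin/ojdbc6.jar")
println(fileUri.getScheme)   // prints "file"
```

Based on this, a plausible (untested here) fix would be to pass the jar as a file:/// URI, e.g. sc.addJar("file:///D:/Spark/spark-2.4.3-bin-hadoop2.7/bin/ojdbc6.jar"), or to rely solely on the --jars flag already given on the spark-shell command line.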

0 Answers:

No answers yet.