SparkSql cannot connect to a local file on the Windows FS

Time: 2019-05-31 05:22:36

Tags: windows spark-shell

I have installed Apache Spark on my local Windows system. I am trying to read a local file from the Windows system using spark-shell, but it throws an "Input path does not exist" exception. Please suggest how to read a local Windows file using spark-shell.

I have tried all of the following:

sc.textFile("file:///D:/test/abc.txt").count()
sc.textFile("file:///D://test//abc.txt").count()

sc.textFile("file:///D:/test/abc.txt").count()

org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/D:/test/abc.txt
  at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287)
  at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
  at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
  at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:200)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
  at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:46)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253)
  at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251)
  at scala.Option.getOrElse(Option.scala:121)
  at org.apache.spark.rdd.RDD.partitions(RDD.scala:251)
  at org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)
  at org.apache.spark.rdd.RDD.count(RDD.scala:1168)
  ... 49 elided
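
For reference, a minimal sketch of how reading a local Windows file from spark-shell typically looks, assuming the file actually exists at D:\test\abc.txt on the machine running Spark in local mode; the file:/// URI with forward slashes used above is generally valid syntax, so this exception usually points at a missing file or a path typo rather than at the URI form.

// A minimal sketch for spark-shell, where `sc` and `spark` are predefined.
// Assumes the file really exists at D:\test\abc.txt on the machine running Spark (local mode).

// First check the path from the JVM's point of view, outside of Spark.
new java.io.File("D:/test/abc.txt").exists()   // should return true

// Forward slashes inside a file:/// URI are the usual form on Windows.
val rdd = sc.textFile("file:///D:/test/abc.txt")
rdd.count()

// The Dataset API accepts the same URI.
val ds = spark.read.textFile("file:///D:/test/abc.txt")
ds.count()

If the count still fails with the same exception, the file is typically not visible to the process that runs the tasks (wrong drive letter, a typo in the name, or a shell that is actually connected to a cluster whose executors do not share the local disk).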

0 Answers:

No answers yet.