Reading a local Windows file in Apache Spark

Date: 2015-06-18 18:00:39

Tags: eclipse scala apache-spark

I am trying to use Spark locally. My environment is:

  1. Eclipse Luna with prebuilt Scala support.
  2. Created a project, converted it to Maven, and added the Spark Core dependency JAR.
  3. Downloaded WinUtils.exe and set the HADOOP_HOME path.
  4. The code I am trying to run is:

    import org.apache.spark.{SparkConf, SparkContext}

    object HelloWorld {
      def main(args: Array[String]) {
        println("Hello, world!")
        /* val master = args.length match {
             case x: Int if x > 0 => args(0)
             case _ => "local"
           } */
        /* val sc = new SparkContext(master, "BasicMap", System.getenv("SPARK_HOME")) */
        val conf = new SparkConf().setAppName("HelloWorld").setMaster("local[2]").set("spark.executor.memory", "1g")
        val sc = new SparkContext(conf)
        val input = sc.textFile("C://Users//user name//Downloads//error.txt")
        // Split it up into words.
        val words = input.flatMap(line => line.split(" "))
        // Transform into pairs and count.
        val counts = words.map(word => (word, 1)).reduceByKey { case (x, y) => x + y }
        counts.foreach(println)
      }
    }
    

    But when I use the SparkContext to read the file, it fails with the following error:

    Exception in thread "main" org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Users/Downloads/error.txt
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:251)
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:270)
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:207)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
    at scala.Option.getOrElse(Option.scala:120)
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
    at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:65)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:290)
    at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$3.apply(PairRDDFunctions.scala:290)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:148)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:109)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
    at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:289)
    at com.examples.HelloWorld$.main(HelloWorld.scala:23)
    at com.examples.HelloWorld.main(HelloWorld.scala)
    

    Can someone give me some insight into how to overcome this error?

2 answers:

Answer 0 (score: 0)

The problem was that the space in the user name was causing all the issues. Once I moved the file to a path without spaces, it worked fine.
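
A minimal sketch of that fix, reusing the question's `sc` (SparkContext); the space-free path `C:/SparkData/error.txt` is only an illustrative placeholder, not the asker's actual location:

    // Hypothetical space-free location; forward slashes work fine on Windows.
    val input = sc.textFile("C:/SparkData/error.txt")
    // Same word count as in the question, now reading from the new path.
    input.flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .foreach(println)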

Answer 1 (score: 0)

It worked for me on W10 with Spark 2: in SparkSession.builder(), set .config("spark.sql.warehouse.dir", "file:///"),

and use \\ in the path.

PS: be sure to include the file's complete extension.
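
A sketch of this Spark 2 setup, assuming a local SparkSession; the backslash-escaped path and file name are hypothetical placeholders:

    import org.apache.spark.sql.SparkSession

    // Spark 2.x: point the SQL warehouse dir at the local file system, as suggested above.
    val spark = SparkSession.builder()
      .appName("HelloWorld")
      .master("local[*]")
      .config("spark.sql.warehouse.dir", "file:///")
      .getOrCreate()

    // Use escaped backslashes and the complete file extension in the path.
    val lines = spark.read.textFile("C:\\SparkData\\error.txt")
    lines.show()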
