No enum constant GET_BLOCK_LOCATIONS HDFS

Date: 2019-04-25 16:13:59

Tags: scala apache-spark hdfs

I am trying to read an Avro file from HDFS with Spark, but I get an exception. The file does exist on HDFS: I have checked it in the file browser, and also by calling the exists method on the FileSystem. The exception:

org.apache.hadoop.ipc.RemoteException: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.fs.http.client.HttpFSFileSystem.Operation.GET_BLOCK_LOCATIONS
[info]   at org.apache.hadoop.hdfs.web.JsonUtilClient.toRemoteException(JsonUtilClient.java:88)
[info]   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:509)
[info]   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$200(WebHdfsFileSystem.java:135)
[info]   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.connect(WebHdfsFileSystem.java:745)
[info]   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.runWithRetry(WebHdfsFileSystem.java:820)
[info]   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.access$100(WebHdfsFileSystem.java:648)
[info]   at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner$1.run(WebHdfsFileSystem.java:686)
[info]   at java.security.AccessController.doPrivileged(Native Method)
[info]   at javax.security.auth.Subject.doAs(Subject.java:422)
[info]   at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)

What could be the cause of this error?

Here is the code that reads the data:

 import org.apache.hadoop.conf.Configuration
 import org.apache.hadoop.fs.FileSystem
 import org.apache.spark.sql.SparkSession

 private val sparkSession = SparkSession
   .builder
   .master("local[*]")
   .appName("SparkJobHDFSApp")
   .getOrCreate()

 // Used for the existence check mentioned above
 val fs = FileSystem.get(new java.net.URI(hdfsUrl), new Configuration())

 val fileContents = sparkSession
   .sparkContext
   .textFile(fullPath)
   .collect()
   .toList
 fileContents.foreach(println)
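One plausible cause, judging only from the stack trace (this is an assumption, not stated in the post): the `hdfsUrl` uses the `webhdfs://` scheme and routes through an HttpFS gateway, whose `HttpFSFileSystem.Operation` enum lacks `GET_BLOCK_LOCATIONS`, the operation Spark issues when computing input splits for `textFile`. A hypothetical helper (the names `usesWebHdfs` and `toNativeScheme` are not from the post) sketches how one might detect such a URL and rewrite it to the native `hdfs://` scheme:

```scala
import java.net.URI

// Hypothetical check: does this URL route through WebHDFS/HttpFS,
// where GET_BLOCK_LOCATIONS may be unsupported?
def usesWebHdfs(hdfsUrl: String): Boolean =
  new URI(hdfsUrl).getScheme == "webhdfs"

// Rewrite webhdfs:// to the native hdfs:// scheme. Note: only the scheme is
// swapped here; the native RPC port usually differs from the HttpFS port
// (often 8020 vs 14000), so the authority would need adjusting for a real cluster.
def toNativeScheme(hdfsUrl: String): String = {
  val u = new URI(hdfsUrl)
  if (u.getScheme == "webhdfs")
    "hdfs://" + u.getAuthority + Option(u.getPath).getOrElse("")
  else
    hdfsUrl
}
```

If the URL does point at an HttpFS gateway, reading through the NameNode's native `hdfs://` endpoint instead would sidestep the unsupported operation; this is a sketch of the diagnosis, not a confirmed fix.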

0 Answers