GoogleHadoopFileSystem cannot be cast to hadoop FileSystem?

Posted: 2015-07-17 15:07:31

Tags: apache-spark google-hadoop

The original problem: trying to deploy Spark 1.4 on Google Cloud. After downloading and setting

SPARK_HADOOP2_TARBALL_URI='gs://my_bucket/my-images/spark-1.4.1-bin-hadoop2.6.tgz'
and deploying with bdutil, everything works fine; however, calling SqlContext.parquetFile("gs://my_bucket/some_data.parquet") hits the following exception:

 java.lang.ClassCastException: com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem cannot be cast to org.apache.hadoop.fs.FileSystem
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2595)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:169)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:354)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.hive.metastore.Warehouse.getFs(Warehouse.java:112)
at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:144)
at org.apache.hadoop.hive.metastore.Warehouse.getWhRoot(Warehouse.java:159)
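
For reference, the exact spark-shell call that produces this is nothing more than the following (sqlContext here being the HiveContext that spark-shell sets up, as the Hive metastore frames in the trace suggest):

// the gs:// path is the same one mentioned above
val df = sqlContext.parquetFile("gs://my_bucket/some_data.parquet")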

What confuses me is that GoogleHadoopFileSystem should be a subclass of org.apache.hadoop.fs.FileSystem, and I even verified this in the same spark-shell instance:

scala> var gfs = new com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem()
gfs: com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem = com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem@46f105c

scala> gfs.isInstanceOf[org.apache.hadoop.fs.FileSystem]
res3: Boolean = true

scala> gfs.asInstanceOf[org.apache.hadoop.fs.FileSystem]
res4: org.apache.hadoop.fs.FileSystem = com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem@46f105c

Am I missing any workaround here? Thanks in advance!

Update: here are my bdutil (version 1.3.1) deployment settings:

import_env hadoop2_env.sh
import_env extensions/spark/spark_env.sh
CONFIGBUCKET="my_conf_bucket"
PROJECT="my_proj"
GCE_IMAGE='debian-7-backports'
GCE_MACHINE_TYPE='n1-highmem-4'
GCE_ZONE='us-central1-f'
GCE_NETWORK='my-network'
GCE_MASTER_MACHINE_TYPE='n1-standard-2'
PREEMPTIBLE_FRACTION=1.0
PREFIX='my-hadoop'
NUM_WORKERS=8
USE_ATTACHED_PDS=true
WORKER_ATTACHED_PDS_SIZE_GB=200
MASTER_ATTACHED_PD_SIZE_GB=200
HADOOP_TARBALL_URI="gs://hadoop-dist/hadoop-2.6.0.tar.gz"
SPARK_MODE="yarn-client"
SPARK_HADOOP2_TARBALL_URI='gs://my_conf_bucket/my-images/spark-1.4.1-bin-hadoop2.6.tgz'

1 answer:

Answer 0 (score: 2)

Short Answer

Indeed it is related to IsolatedClientLoader; we have tracked down the root cause and verified a fix. I filed https://issues.apache.org/jira/browse/SPARK-9206 to track this issue, and successfully built a clean Spark tarball from my fork with a simple fix: https://github.com/apache/spark/pull/7549

In the meantime, there are a few short-term options:

  1. Use Spark 1.3.1 for the time being.
  2. In your bdutil deployment, use HDFS as the default filesystem (--default_fs=hdfs); you can still specify gs:// paths directly in your jobs, HDFS is just used for intermediate data and staging files. There are some minor incompatibilities with using raw Hive in this mode, though.
  3. If you don't need HiveContext features, use a plain val sqlContext = new org.apache.spark.sql.SQLContext(sc) instead of a HiveContext (see the sketch after this list).
  4. git clone https://github.com/dennishuo/spark and run ./make-distribution.sh --name my-custom-spark --tgz --skip-java-test -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -Phive -Phive-thriftserver to get a fresh tarball that you can specify in your bdutil spark_env.sh.
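
For option 3, a minimal sketch of what that looks like in spark-shell, assuming sc is your existing SparkContext and reusing the gs:// path from the question:

    // A plain SQLContext does not go through IsolatedClientLoader, so the
    // GCS connector class loads normally.
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)
    val df = sqlContext.parquetFile("gs://my_bucket/some_data.parquet")
    df.printSchema()
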
Long Answer

We have confirmed that this only manifests when fs.default.name and fs.defaultFS are set to a gs:// path; regardless of whether you try to load a path with parquetFile("gs://...") or parquetFile("hdfs://..."), loading data from both HDFS and GCS works fine when fs.default.name and fs.defaultFS point at an HDFS path. This is also specific to Spark 1.4+ at the moment and is not present in Spark 1.3.1 or older.
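
As an aside, you can check which default filesystem your deployment actually picked up by reading the live Hadoop configuration from spark-shell; this is just an illustrative check, not part of the fix:

    // Inspect the effective Hadoop configuration of the running SparkContext.
    val hadoopConf = sc.hadoopConfiguration
    println(hadoopConf.get("fs.defaultFS"))     // a gs:// value here is what triggers the bug
    println(hadoopConf.get("fs.default.name"))  // deprecated alias of fs.defaultFS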

The regression appears to have been introduced in https://github.com/apache/spark/commit/9ac8393663d759860c67799e000ec072ced76493, which actually fixes a prior related classloading issue, SPARK-8368. While that fix itself is correct for the normal case, the method IsolatedClientLoader.isSharedClass is used to determine which classloader to use, and it interacts with the aforementioned commit to break classloading of GoogleHadoopFileSystem.

The following lines in that file include everything under com.google.* as a "shared class", because the Guava and possibly protobuf dependencies are indeed loaded as shared libraries, but unfortunately GoogleHadoopFileSystem should be loaded as a "hive class" in this case, just like org.apache.hadoop.hdfs.DistributedFileSystem. We just happen to unluckily share the com.google.* package namespace.

    protected def isSharedClass(name: String): Boolean =
      name.contains("slf4j") ||
      name.contains("log4j") ||
      name.startsWith("org.apache.spark.") ||
      name.startsWith("scala.") ||
      name.startsWith("com.google") ||
      name.startsWith("java.lang.") ||
      name.startsWith("java.net") ||
      sharedPrefixes.exists(name.startsWith)
    
    ...
    
    /** The classloader that is used to load an isolated version of Hive. */
    protected val classLoader: ClassLoader = new URLClassLoader(allJars, rootClassLoader) {
      override def loadClass(name: String, resolve: Boolean): Class[_] = {
        val loaded = findLoadedClass(name)
        if (loaded == null) doLoadClass(name, resolve) else loaded
      }
    
      def doLoadClass(name: String, resolve: Boolean): Class[_] = {
        ...
        } else if (!isSharedClass(name)) {
          logDebug(s"hive class: $name - ${getResource(classToPath(name))}")
          super.loadClass(name, resolve)
        } else {
          // For shared classes, we delegate to baseClassLoader.
          logDebug(s"shared class: $name")
          baseClassLoader.loadClass(name)
        }
      }
    }
    

This can be verified by adding the following line to ${SPARK_INSTALL}/conf/log4j.properties:

    log4j.logger.org.apache.spark.sql.hive.client=DEBUG
    

And the output shows:

    ...
    15/07/20 20:59:14 DEBUG IsolatedClientLoader: hive class: org.apache.hadoop.hdfs.DistributedFileSystem - jar:file:/home/hadoop/spark-install/lib/spark-assembly-1.4.1-hadoop2.6.0.jar!/org/apache/hadoop/hdfs/DistributedFileSystem.class
    ...
    15/07/20 20:59:14 DEBUG IsolatedClientLoader: shared class: com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem
    java.lang.RuntimeException: java.lang.ClassCastException: com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem cannot be cast to org.apache.hadoop.fs.FileSystem
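
For completeness, the shape of the fix is to stop treating the GCS connector's namespace as a shared prefix, so that GoogleHadoopFileSystem gets loaded as a "hive class" like the other Hadoop filesystems; a rough sketch (the authoritative change is in the JIRA and pull request linked above) looks like:

    // Sketch only; see https://github.com/apache/spark/pull/7549 for the real
    // change. Guava/protobuf under com.google.* stay shared, while the GCS
    // connector under com.google.cloud.* is handed to the Hive classloader.
    protected def isSharedClass(name: String): Boolean =
      name.contains("slf4j") ||
      name.contains("log4j") ||
      name.startsWith("org.apache.spark.") ||
      name.startsWith("scala.") ||
      (name.startsWith("com.google") && !name.startsWith("com.google.cloud")) ||
      name.startsWith("java.lang.") ||
      name.startsWith("java.net") ||
      sharedPrefixes.exists(name.startsWith)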