I am trying to import a CSV file from S3 using spark-shell (val df = spark.read.csv("s3a://xxxxxx")), where the spark-shell client connects to a remote YARN cluster. It fails with java.lang.VerifyError; however, when I launch spark-shell from the YARN ResourceManager machine itself, it works fine.
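For reference, a minimal sketch of the failing session (the bucket path is elided here just as in the original; spark.master is set to yarn in the configuration listed below):

// spark-shell launched from a client machine against the remote YARN cluster
val df = spark.read.csv("s3a://xxxxxx")   // fails with java.lang.VerifyError at this point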
Here is the error:
java.lang.VerifyError: Bad type on operand stack
Exception Details:
Location:
org/apache/hadoop/fs/s3a/S3AFileSystem.s3GetFileStatus(Lorg/apache/hadoop/fs/Path;Ljava/lang/String;Ljava/util/Set;)Lorg/apache/hadoop/fs/s3a/S3AFileStatus; @274: invokestatic
Reason:
Type 'com/amazonaws/AmazonServiceException' (current frame, stack[2]) is not assignable to 'com/amazonaws/SdkBaseException'
Current Frame:
bci: @274
flags: { }
locals: { 'org/apache/hadoop/fs/s3a/S3AFileSystem', 'org/apache/hadoop/fs/Path', 'java/lang/String', 'java/util/Set', 'java/lang/String', 'com/amazonaws/AmazonServiceException' }
stack: { 'java/lang/String', 'java/lang/String', 'com/amazonaws/AmazonServiceException' }
spark.master yarn
spark.hadoop.fs.s3a.server-side-encryption-algorithm SSE-KMS
spark.hadoop.fs.s3a.server-side-encryption.key xxxxxxxxxxxxxxxxxxxxxxxxxxx
spark.hadoop.fs.s3a.enableServerSideEncryption true
com.amazonaws.services.s3.enableV4 true
spark.hadoop.fs.s3a.impl org.apache.hadoop.fs.s3a.S3AFileSystem
spark.blockManager.port 20020
spark.driver.port 20020
spark.master.ui.port 4048
spark.ui.port 4041
spark.port.maxRetries 100
spark.yarn.jars hdfs://hdfs-master:4040/spark/jars/*
spark.driver.extraJavaOptions=-Dlog4j.configuration=/usr/local/spark/conf/log4j.properties
spark.executor.extraJavaOptions=-Dlog4j.configuration=/usr/local/spark/conf/log4j.properties
spark.eventLog.enabled true
spark.eventLog.dir hdfs://hdfs-master:4040/spark-logs
spark.yarn.app.container.log.dir /home/aws_install/hadoop/logdir
hadoop_add_to_classpath_tools hadoop-aws
Do you know what the source of the problem is?
Answer (score: 1)
This hints at a classpath problem.

One issue with the hadooprc change is that it only alters your local environment, not that of the rest of the cluster. However, the fact that you got as far as org/apache/hadoop/fs/s3a/S3AFileSystem.s3GetFileStatus means the S3A JAR is being loaded; it is the JVM itself that is rejecting it.

Most likely there are two copies of the AWS SDK on the classpath, so the AmazonServiceException that was just raised is not a subclass of the SdkBaseException the S3A code expects, because the JARs are mixed.
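To check the mixed-SDK theory, here is a small diagnostic you can paste into the same spark-shell session (my own sketch, not part of the original answer); it prints which JAR each of the two SDK classes involved in the VerifyError is actually loaded from, so two different files or versions would point to the duplicate to remove:

// Show which JAR provides each AWS SDK class mentioned in the VerifyError.
// Runs on the driver, so it only inspects the driver's classpath.
Seq("com.amazonaws.SdkBaseException", "com.amazonaws.AmazonServiceException").foreach { name =>
  val src = Class.forName(name).getProtectionDomain.getCodeSource
  println(s"$name -> ${Option(src).map(_.getLocation).getOrElse("unknown source")}")
}

Since spark.yarn.jars points at hdfs://hdfs-master:4040/spark/jars/*, the JARs staged there for the executors would also need to be checked for a second copy of the AWS SDK.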