Question: I have two Java classes with the same fully qualified name.
I am running an EMR job. I bundle all of the dependencies into a single jar and upload it to S3, and the EMR cluster is supposed to pick this jar up from S3. But I get this error:
    Exception in thread "main" java.lang.IllegalAccessError: class org.apache.hadoop.fs.s3native.AbstractNativeS3FileSystemStore cannot access its superinterface org.apache.hadoop.fs.s3native.NativeFileSystemStore
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:270)
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:861)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:906)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1411)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:68)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1435)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:260)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
        at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.setInputPaths(FileInputFormat.java:352)
        at org.apache.hadoop.mapreduce.lib.input.DelegatingInputFormat.getSplits(DelegatingInputFormat.java:110)
        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:1016)
        at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:1033)
        at org.apache.hadoop.mapred.JobClient.access$700(JobClient.java:174)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:951)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:904)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1140)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:904)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:501)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:531)
        at com.amazon.idq.chia.aws.emr.mains.BaseFilterStepMain.configureAndRunJob(BaseFilterStepMain.java:51)
        at com.amazon.idq.chia.aws.emr.mains.BaseFilterStepMain.run(BaseFilterStepMain.java:84)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at com.amazon.idq.chia.aws.emr.mains.FFilterHybridStepMain.main(FFilterHybridStepMain.java:24)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:187)
What I have tried so far: I noticed that the two versions of the class have different visibility. The code expects the jar2 version of NativeFileSystemStore (public access), but I believe the classloader is picking up the jar1 version of NativeFileSystemStore (default, package-private access) instead. So I modified the build script as follows:

1. Unpack jar1 and jar2.
2. Delete the restrictive NativeFileSystemStore class from jar1.
3. Move org.apache.hadoop.fs.s3native.* from jar2 into jar1.
4. Repackage the classes into jar1-resolved.jar and jar2-resolved.jar.
5. Run the EMR job again.
Result: I still hit the same error.
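One way to check whether duplicate copies of the class survived the repackaging is to enumerate the matching classpath resources without defining the class, so the check itself cannot trigger the IllegalAccessError. Here is a minimal diagnostic sketch (the class name DuplicateClassCheck is made up for this illustration):

    import java.net.URL;
    import java.util.Enumeration;

    public class DuplicateClassCheck {
        public static void main(String[] args) throws Exception {
            String[] resources = {
                "org/apache/hadoop/fs/s3native/NativeFileSystemStore.class",
                "org/apache/hadoop/fs/s3native/AbstractNativeS3FileSystemStore.class"
            };
            ClassLoader cl = DuplicateClassCheck.class.getClassLoader();
            for (String res : resources) {
                System.out.println(res + " found at:");
                // getResources() lists every matching classpath entry and, unlike
                // Class.forName(), does not define the class, so it cannot throw
                // IllegalAccessError. Two URLs here means two jars ship the class.
                Enumeration<URL> urls = cl.getResources(res);
                while (urls.hasMoreElements()) {
                    System.out.println("  " + urls.nextElement());
                }
            }
        }
    }

Run with both resolved jars on the classpath: if the surgery worked, each class should print exactly one URL.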
Answer 0 (score: 0)
In most cases, the cause of an IllegalAccessError is a version mismatch. Could you run "javap -verbose $classname | grep 'major'" on the two class files, org.apache.hadoop.fs.s3native.NativeFileSystemStore (the copy you included in the resolved jar) and org.apache.hadoop.fs.s3native.AbstractNativeS3FileSystemStore (which I believe you compiled yourself), and check whether the major versions match? (Note that "javap -version" only prints the version of javap itself, so use the -verbose form.)
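If javap is not at hand, the same check can be done by reading the version fields at the start of each class file. A small self-contained sketch (the class name ClassVersionCheck is made up for this illustration; it assumes Java 7+ for try-with-resources):

    import java.io.DataInputStream;
    import java.io.FileInputStream;

    public class ClassVersionCheck {
        public static void main(String[] args) throws Exception {
            for (String path : args) {
                try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
                    // Class file layout: u4 magic, u2 minor_version, u2 major_version.
                    int magic = in.readInt();           // always 0xCAFEBABE for a class file
                    int minor = in.readUnsignedShort();
                    int major = in.readUnsignedShort(); // e.g. 50 = Java 6, 51 = Java 7, 52 = Java 8
                    System.out.printf("%s: major=%d minor=%d%n", path, major, minor);
                }
            }
        }
    }

Point it at the extracted .class files from each unpacked jar; if the two copies report different major versions, that confirms the mismatch.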