After submitting the job, the exception below occurs.
I suspect the problem lies in some configuration setting: copying the dependencies to HDFS takes too long. Does every job have to upload its dependencies? How can I skip this upload, or disable the timestamp check that fails here?
The job's purpose is to generate HFiles for an HBase bulk load. The driver code is at the bottom.
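What actually fails is the NodeManager localizer's timestamp check: the AM expected libjars with modification time 1562832754025, but the router-backed filesystem reported 1562832729761. As far as I know there is no supported switch to disable that check. One workaround to try (my assumption, not something confirmed by the logs) is to point the MapReduce staging directory at a single concrete nameservice instead of the federation router, so the client and the NodeManagers read modification times from the same NameNode. A minimal mapred-site.xml sketch, assuming `ns1` is one of your real nameservices:

```xml
<!-- Sketch only: "ns1" is an assumption; substitute one of your actual
     nameservices. This makes the job staging dir bypass router-fed so the
     submitter and the localizer see consistent timestamps for libjars. -->
<property>
  <name>yarn.app.mapreduce.am.staging-dir</name>
  <value>hdfs://ns1/user</value>
</property>
```

The same property can also be set programmatically on the job's Configuration before `Job.getInstance(conf, ...)` is called.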
19/07/11 16:12:03 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/production/.staging/job_1562356395340_0860
19/07/11 16:12:43 INFO input.FileInputFormat: Total input files to process : 95
19/07/11 16:12:43 INFO mapreduce.JobSubmitter: number of splits:95
19/07/11 16:12:43 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1562356395340_0860
19/07/11 16:12:43 INFO mapreduce.JobSubmitter: Executing with tokens: [Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:ns2, Ident: (token for production: HDFS_DELEGATION_TOKEN owner=production@SCKDC, renewer=yarn, realUser=, issueDate=1562832715610, maxDate=1563437515610, sequenceNumber=7655, masterKeyId=356), Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:ns1, Ident: (token for production: HDFS_DELEGATION_TOKEN owner=production@SCKDC, renewer=yarn, realUser=, issueDate=1562832715653, maxDate=1563437515653, sequenceNumber=9621, masterKeyId=380), Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:router-fed, Ident: (token for production: HDFS_DELEGATION_TOKEN owner=production@SCKDC, renewer=yarn, realUser=, issueDate=1562832715308, maxDate=1563437515308, sequenceNumber=7107, masterKeyId=2988), Kind: HBASE_AUTH_TOKEN, Service: fa508d57-62ba-4e15-9136-9c01bf785194, Ident: ((username=production@SCKDC, keyId=112, issueDate=1562832714823, expirationDate=1563437514823, sequenceNumber=21)), Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:ns8, Ident: (token for production: HDFS_DELEGATION_TOKEN owner=production@SCKDC, renewer=yarn, realUser=, issueDate=1562832715823, maxDate=1563437515823, sequenceNumber=6851, masterKeyId=346), Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:ns7, Ident: (token for production: HDFS_DELEGATION_TOKEN owner=production@SCKDC, renewer=yarn, realUser=, issueDate=1562832715683, maxDate=1563437515683, sequenceNumber=7088, masterKeyId=370), Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:ns4, Ident: (token for production: HDFS_DELEGATION_TOKEN owner=production@SCKDC, renewer=yarn, realUser=, issueDate=1562832715547, maxDate=1563437515547, sequenceNumber=7122, masterKeyId=343), Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:ns3, Ident: (token for production: HDFS_DELEGATION_TOKEN owner=production@SCKDC, renewer=yarn, realUser=, issueDate=1562832715795, maxDate=1563437515795, sequenceNumber=7370, masterKeyId=360), Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:ns6, 
Ident: (token for production: HDFS_DELEGATION_TOKEN owner=production@SCKDC, renewer=yarn, realUser=, issueDate=1562832715757, maxDate=1563437515757, sequenceNumber=7048, masterKeyId=350), Kind: HDFS_DELEGATION_TOKEN, Service: ha-hdfs:ns5, Ident: (token for production: HDFS_DELEGATION_TOKEN owner=production@SCKDC, renewer=yarn, realUser=, issueDate=1562832715721, maxDate=1563437515721, sequenceNumber=7131, masterKeyId=356)]
19/07/11 16:12:44 INFO conf.Configuration: found resource resource-types.xml at file:/usr/bch/3.0.0/hadoop/etc/hadoop/resource-types.xml
19/07/11 16:12:44 INFO impl.TimelineClientImpl: Timeline service address: null
19/07/11 16:12:44 INFO impl.YarnClientImpl: Submitted application application_1562356395340_0860
19/07/11 16:12:44 INFO mapreduce.Job: The url to track the job: http://hebsjzx-schadoop-master-42-174:8088/proxy/application_1562356395340_0860/
19/07/11 16:12:44 INFO mapreduce.Job: Running job: job_1562356395340_0860
19/07/11 16:12:49 INFO mapreduce.Job: Job job_1562356395340_0860 running in uber mode : false
19/07/11 16:12:49 INFO mapreduce.Job: map 0% reduce 0%
19/07/11 16:12:49 INFO mapreduce.Job: Job job_1562356395340_0860 failed with state FAILED due to: Application application_1562356395340_0860 failed 2 times due to AM Container for appattempt_1562356395340_0860_000002 exited with exitCode: -1000
Failing this attempt.Diagnostics: [2019-07-11 16:12:42.551]Resource hdfs://router-fed/user/production/.staging/job_1562356395340_0860/libjars changed on src filesystem (expected 1562832754025, was 1562832729761
java.io.IOException: Resource hdfs://router-fed/user/production/.staging/job_1562356395340_0860/libjars changed on src filesystem (expected 1562832754025, was 1562832729761
at org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:273)
at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:67)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:414)
at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:411)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:411)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.doDownloadCall(ContainerLocalizer.java:242)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:235)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer$FSDownloadWrapper.call(ContainerLocalizer.java:223)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
For more detailed output, check the application tracking page: http://hebsjzx-schadoop-master-42-174:8088/cluster/app/application_1562356395340_0860 Then click on links to logs of each attempt.
. Failing the application.
19/07/11 16:12:49 INFO mapreduce.Job: Counters: 0
// Driver: configure a map-only job that writes HFiles for bulk loading
Job job = Job.getInstance(conf, JOB_NAME);
job.setJarByClass(DetailedListToHBaseDriver.class);
job.setMapperClass(DetailedListToHBaseMapper.class);
job.setMapOutputKeyClass(ImmutableBytesWritable.class);
job.setMapOutputValueClass(Put.class);
job.setInputFormatClass(TextInputFormat.class);
job.setOutputFormatClass(HFileOutputFormat2.class);

FileInputFormat.addInputPath(job, new Path(input));
FileOutputFormat.setOutputPath(job, new Path(output));

// Set up the reducer, partitioner and total-order sorting for the
// target table's regions
Connection conn = SystemUtils.createHBaseConnection(conf);
TableName tableName = TableName.valueOf(name);
HFileOutputFormat2.configureIncrementalLoad(job,
        conn.getTable(tableName), conn.getRegionLocator(tableName));

// Submit and block until completion (this is where the log above is produced)
System.exit(job.waitForCompletion(true) ? 0 : 1);