TestDFSIO run never completes

Time: 2019-03-28 13:57:16

Tags: hadoop

I am running the TestDFSIO benchmark on some very heavy iron: 8 servers, each with dual 20-core Intel Gold CPUs, 786 GB of RAM, and SSD storage.

However, the TestDFSIO run never finishes. The Hadoop distribution in use is Hortonworks HDP 3.1. I admit I did not do the installation myself; it was done by someone with experience installing Hadoop.

Since this is powerful hardware and the amount of data being written is only 2.5 GB (50 files of 50 MB each), I expected the TestDFSIO run to finish quickly.

It has been 30 minutes and the run is still going (or it is hung, I am not sure which).

This is my third attempt.

How can I get the run to complete? Or how can I find out what is causing it to hang? Any help would be greatly appreciated.
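
In case it helps, these are the diagnostics I think I can run against the stuck job (assuming the standard YARN/MapReduce client commands are available; the application and job IDs are the ones from the output below):

# List applications the ResourceManager currently considers active
yarn application -list -appStates ACCEPTED,RUNNING

# Detailed state and progress of the TestDFSIO application
yarn application -status application_1553717055364_0008

# Container logs for the application (once containers have produced logs)
yarn logs -applicationId application_1553717055364_0008

# Map/reduce completion percentages as MapReduce sees them
mapred job -status job_1553717055364_0008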

Thanks

[root@centos1 ~]# sudo -u hdfs yarn jar /usr/hdp/3.1.0.0-78/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.0.0-78-tests.jar TestDFSIO -write -nrFiles 50 -fileSize 50
java.lang.NoClassDefFoundError: junit/framework/TestCase
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at org.apache.hadoop.test.MapredTestDriver.<init>(MapredTestDriver.java:109)
        at org.apache.hadoop.test.MapredTestDriver.<init>(MapredTestDriver.java:61)
        at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:147)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Caused by: java.lang.ClassNotFoundException: junit.framework.TestCase
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        ... 21 more
2019-03-28 13:44:54,869 INFO fs.TestDFSIO: TestDFSIO.1.8
2019-03-28 13:44:54,870 INFO fs.TestDFSIO: nrFiles = 50
2019-03-28 13:44:54,871 INFO fs.TestDFSIO: nrBytes (MB) = 50.0
2019-03-28 13:44:54,871 INFO fs.TestDFSIO: bufferSize = 1000000
2019-03-28 13:44:54,871 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
2019-03-28 13:44:55,501 INFO fs.TestDFSIO: creating control file: 52428800 bytes, 50 files
2019-03-28 13:44:56,209 INFO fs.TestDFSIO: created control files for: 50 files
2019-03-28 13:44:56,378 INFO client.RMProxy: Connecting to ResourceManager at centos1.demoserver.poc/10.103.40.30:8050
2019-03-28 13:44:56,514 INFO client.AHSProxy: Connecting to Application History server at centos2.demoserver.poc/10.103.40.32:10200
2019-03-28 13:44:56,539 INFO client.RMProxy: Connecting to ResourceManager at centos1.demoserver.poc/10.103.40.30:8050
2019-03-28 13:44:56,539 INFO client.AHSProxy: Connecting to Application History server at centos2.demoserver.poc/10.103.40.32:10200
2019-03-28 13:44:56,685 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /user/hdfs/.staging/job_1553717055364_0008
2019-03-28 13:44:56,761 INFO mapred.FileInputFormat: Total input files to process : 50
2019-03-28 13:44:56,792 INFO mapreduce.JobSubmitter: number of splits:50
2019-03-28 13:44:56,817 INFO Configuration.deprecation: yarn.resourcemanager.zk-num-retries is deprecated. Instead, use hadoop.zk.num-retries
2019-03-28 13:44:56,817 INFO Configuration.deprecation: yarn.resourcemanager.zk-address is deprecated. Instead, use hadoop.zk.address
2019-03-28 13:44:56,817 INFO Configuration.deprecation: io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
2019-03-28 13:44:56,817 INFO Configuration.deprecation: yarn.resourcemanager.zk-timeout-ms is deprecated. Instead, use hadoop.zk.timeout-ms
2019-03-28 13:44:56,817 INFO Configuration.deprecation: yarn.resourcemanager.zk-acl is deprecated. Instead, use hadoop.zk.acl
2019-03-28 13:44:56,818 INFO Configuration.deprecation: yarn.resourcemanager.zk-retry-interval-ms is deprecated. Instead, use hadoop.zk.retry-interval-ms
2019-03-28 13:44:56,818 INFO Configuration.deprecation: yarn.resourcemanager.display.per-user-apps is deprecated. Instead, use yarn.webapp.filter-entity-list-by-user
2019-03-28 13:44:56,818 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled
2019-03-28 13:44:56,912 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1553717055364_0008
2019-03-28 13:44:56,915 INFO mapreduce.JobSubmitter: Executing with tokens: []
2019-03-28 13:44:57,052 INFO conf.Configuration: found resource resource-types.xml at file:/etc/hadoop/3.1.0.0-78/0/resource-types.xml
2019-03-28 13:44:57,106 INFO impl.YarnClientImpl: Submitted application application_1553717055364_0008
2019-03-28 13:44:57,184 INFO mapreduce.Job: The url to track the job: http://centos1.demoserver.poc:8088/proxy/application_1553717055364_0008/
2019-03-28 13:44:57,185 INFO mapreduce.Job: Running job: job_1553717055364_0008
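
One more thing I notice: the run starts with java.lang.NoClassDefFoundError: junit/framework/TestCase, although the job still gets submitted afterwards, so I do not know whether it is related to the hang. If it is, my guess (untested on this cluster, and the jar location below is an assumption) is that putting a junit jar on the client classpath would get rid of it, roughly like this:

# Locate a junit jar somewhere under the HDP install (the path is a guess)
JUNIT_JAR=$(find /usr/hdp/3.1.0.0-78 -name 'junit*.jar' 2>/dev/null | head -n 1)

# HADOOP_CLASSPATH is appended to the client classpath by the yarn launcher;
# it has to be passed through explicitly because sudo strips the environment
sudo -u hdfs env HADOOP_CLASSPATH="$JUNIT_JAR" \
    yarn jar /usr/hdp/3.1.0.0-78/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.3.1.0.0-78-tests.jar \
    TestDFSIO -write -nrFiles 50 -fileSize 50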

0 Answers:

No answers yet.