Hadoop does not work with LXC and Ubuntu 16.04

Time: 2016-07-01 09:31:10

Tags: java hadoop lxc ubuntu-16.04

I am running hadoop-2.6.0 in LXC containers. The host PC's OS is Ubuntu 16.04, and the container OS is also Ubuntu 16.04.

The error is shown in the output below.

Has anyone been able to run Hadoop on LXC (Ubuntu 16.04)?
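
For context, judging from the `+` xtrace lines in the output below, restart-hadoop-dfs.sh appears to do roughly the following (a reconstructed sketch; the actual script may differ in details):

#!/bin/bash
# Reconstructed sketch of restart-hadoop-dfs.sh based on the "+" xtrace
# lines in the log below; not the exact script.
set -ex

# Stop all HDFS and YARN daemons across the cluster
stop-dfs.sh
stop-yarn.sh

# Wipe the local HDFS storage directories
sudo rm -rf /var/hadoop/hdfs/datanode /var/hadoop/hdfs/namenode

# Re-format the NameNode and bring HDFS and YARN back up
hdfs namenode -format
start-dfs.sh
start-yarn.sh

# Smoke test: run the bundled pi example with 3 maps, 3 samples each
hadoop jar /home/hadoop/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 3 3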

Below is the full console output from running the restart script, including the error:

hadoop@master:/tmp$ ./restart-hadoop-dfs.sh
+ stop-dfs.sh
Stopping namenodes on [master]
master: stopping namenode
cluster-slave02: no datanode to stop
master: stopping datanode
cluster01-slave04: no datanode to stop
cluster01-slave03: no datanode to stop
cluster-slave01: no datanode to stop
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
+ stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
cluster-slave02: stopping nodemanager
master: stopping nodemanager
cluster-slave01: stopping nodemanager
cluster01-slave04: stopping nodemanager
cluster01-slave03: stopping nodemanager
master: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop
+ sudo rm -rf /var/hadoop/hdfs/datanode /var/hadoop/hdfs/namenode
+ hdfs namenode -format
16/07/01 09:33:58 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/157.82.3.142
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.8.0_91
************************************************************/
16/07/01 09:33:58 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/07/01 09:33:58 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-dff9fad1-84cc-448d-a135-77ab870488a6
16/07/01 09:33:58 INFO namenode.FSNamesystem: No KeyProvider found.
16/07/01 09:33:58 INFO namenode.FSNamesystem: fsLock is fair:true
16/07/01 09:33:58 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/07/01 09:33:58 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/07/01 09:33:58 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/07/01 09:33:58 INFO blockmanagement.BlockManager: The block deletion will start around 2016 Jul 01 09:33:58
16/07/01 09:33:58 INFO util.GSet: Computing capacity for map BlocksMap
16/07/01 09:33:58 INFO util.GSet: VM type       = 64-bit
16/07/01 09:33:58 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
16/07/01 09:33:58 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/07/01 09:33:58 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/07/01 09:33:58 INFO blockmanagement.BlockManager: defaultReplication         = 2
16/07/01 09:33:58 INFO blockmanagement.BlockManager: maxReplication             = 512
16/07/01 09:33:58 INFO blockmanagement.BlockManager: minReplication             = 1
16/07/01 09:33:58 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
16/07/01 09:33:58 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
16/07/01 09:33:58 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/07/01 09:33:58 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
16/07/01 09:33:58 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
16/07/01 09:33:58 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)
16/07/01 09:33:58 INFO namenode.FSNamesystem: supergroup          = supergroup
16/07/01 09:33:58 INFO namenode.FSNamesystem: isPermissionEnabled = true
16/07/01 09:33:58 INFO namenode.FSNamesystem: HA Enabled: false
16/07/01 09:33:58 INFO namenode.FSNamesystem: Append Enabled: true
16/07/01 09:33:58 INFO util.GSet: Computing capacity for map INodeMap
16/07/01 09:33:58 INFO util.GSet: VM type       = 64-bit
16/07/01 09:33:58 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
16/07/01 09:33:58 INFO util.GSet: capacity      = 2^20 = 1048576 entries
16/07/01 09:33:58 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/07/01 09:33:58 INFO util.GSet: Computing capacity for map cachedBlocks
16/07/01 09:33:58 INFO util.GSet: VM type       = 64-bit
16/07/01 09:33:58 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
16/07/01 09:33:58 INFO util.GSet: capacity      = 2^18 = 262144 entries
16/07/01 09:33:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/07/01 09:33:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/07/01 09:33:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
16/07/01 09:33:58 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/07/01 09:33:58 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/07/01 09:33:58 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/07/01 09:33:58 INFO util.GSet: VM type       = 64-bit
16/07/01 09:33:58 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
16/07/01 09:33:58 INFO util.GSet: capacity      = 2^15 = 32768 entries
16/07/01 09:33:58 INFO namenode.NNConf: ACLs enabled? false
16/07/01 09:33:58 INFO namenode.NNConf: XAttrs enabled? true
16/07/01 09:33:58 INFO namenode.NNConf: Maximum size of an xattr: 16384
16/07/01 09:33:58 INFO namenode.FSImage: Allocated new BlockPoolId: BP-275974701-157.82.3.142-1467365638786
16/07/01 09:33:58 INFO common.Storage: Storage directory /var/hadoop/hdfs/namenode has been successfully formatted.
16/07/01 09:33:59 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/07/01 09:33:59 INFO util.ExitUtil: Exiting with status 0
16/07/01 09:33:59 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/157.82.3.142
************************************************************/
+ start-dfs.sh
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-namenode-master.out
cluster-slave02: starting datanode, logging to /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-datanode-cluster-slave02.out
cluster-slave01: starting datanode, logging to /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-datanode-cluster-slave01.out
cluster01-slave03: starting datanode, logging to /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-datanode-cluster01-slave03.out
master: starting datanode, logging to /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-datanode-master.out
cluster01-slave04: starting datanode, logging to /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-datanode-cluster01-slave04.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.6.0/logs/hadoop-hadoop-secondarynamenode-master.out
+ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.6.0/logs/yarn-hadoop-resourcemanager-master.out
cluster-slave01: starting nodemanager, logging to /home/hadoop/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-cluster-slave01.out
cluster01-slave03: starting nodemanager, logging to /home/hadoop/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-cluster01-slave03.out
cluster01-slave04: starting nodemanager, logging to /home/hadoop/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-cluster01-slave04.out
cluster-slave02: starting nodemanager, logging to /home/hadoop/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-cluster-slave02.out
master: starting nodemanager, logging to /home/hadoop/hadoop-2.6.0/logs/yarn-hadoop-nodemanager-master.out
+ hadoop jar /home/hadoop/hadoop-2.6.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 3 3
Number of Maps  = 3
Samples per Map = 3
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Starting Job
16/07/01 09:34:24 INFO client.RMProxy: Connecting to ResourceManager at master/157.82.3.142:8032
16/07/01 09:34:25 INFO input.FileInputFormat: Total input paths to process : 3
16/07/01 09:34:25 INFO mapreduce.JobSubmitter: number of splits:3
16/07/01 09:34:25 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1467365657240_0001
16/07/01 09:34:25 INFO impl.YarnClientImpl: Submitted application application_1467365657240_0001
16/07/01 09:34:26 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1467365657240_0001/
16/07/01 09:34:26 INFO mapreduce.Job: Running job: job_1467365657240_0001
16/07/01 09:34:35 INFO ipc.Client: Retrying connect to server: master/157.82.3.142:38101. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/01 09:34:36 INFO ipc.Client: Retrying connect to server: master/157.82.3.142:38101. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/01 09:34:37 INFO ipc.Client: Retrying connect to server: master/157.82.3.142:38101. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/01 09:34:40 INFO ipc.Client: Retrying connect to server: master/157.82.3.142:36772. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/01 09:34:41 INFO ipc.Client: Retrying connect to server: master/157.82.3.142:36772. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/01 09:34:42 INFO ipc.Client: Retrying connect to server: master/157.82.3.142:36772. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
16/07/01 09:34:42 INFO mapreduce.Job: Job job_1467365657240_0001 running in uber mode : false
16/07/01 09:34:42 INFO mapreduce.Job:  map 0% reduce 0%
16/07/01 09:34:42 INFO mapreduce.Job: Job job_1467365657240_0001 failed with state FAILED due to: Application application_1467365657240_0001 failed 2 times due to AM Container for appattempt_1467365657240_0001_000002 exited with  exitCode: 255
For more detailed output, check application tracking page:http://master:8088/proxy/application_1467365657240_0001/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1467365657240_0001_02_000001
Exit code: 255
Stack trace: ExitCodeException exitCode=255:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
        at org.apache.hadoop.util.Shell.run(Shell.java:455)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 255
Failing this attempt. Failing the application.
16/07/01 09:34:42 INFO mapreduce.Job: Counters: 0
Job Finished in 17.93 seconds
java.io.FileNotFoundException: File does not exist: hdfs://master:9000/user/hadoop/QuasiMonteCarlo_1467365661919_1251182664/out/reduce-out
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
        at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1750)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1774)
        at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
        at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

0 Answers:

There are no answers yet.