Hadoop Cluster Deployment with Pivotal

Date: 2015-01-01 15:43:17

Tags: hadoop mapreduce yarn

I am trying to deploy a Hadoop cluster using the Pivotal distribution.

I am following the guide at the link below: http://pivotalhd.docs.pivotal.io/doc/2100/webhelp/topics/ManuallyInstallingandUsingPivotalHD21Stack.html

Deployment configuration:

  1. phd1.xyz.com - NameNode, ResourceManager
  2. phd2.xyz.com - DataNode, NodeManager

All of the services above are up and running, and I can access the HDFS file system, but I cannot execute jobs on the cluster.
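For example, a basic HDFS sanity check works from either node (a minimal illustration of what I mean by "can access HDFS"):

    # Listing the HDFS root succeeds, so the NameNode is reachable
    hadoop fs -ls /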

The linked guide does not say whether jobs must be executed as the root user or as the hdfs user, so I tried both ways.

  1. Error when executing the job as the root user:

    hadoop jar /usr/lib/gphd/hadoop-mapreduce/hadoop-mapreduce-examples-2.2.0-gphd-3.1.0.0.jar pi 2 200

    The following error occurred:

    >     Number of Maps  = 2
    >     Samples per Map = 200
    >     org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE,
    > inode="/user":hdfs:supergroup:drwxr-xr-x
    >             at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:234)
    >             at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:214)
    >             at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:158)
    >             at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5389)
    >             at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5371)
    >             at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5345)
    >             at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3583)
    >             at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3553)
    >             at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3525)
    >             at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:745)
    >             at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:550)
    >             at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:63031)
    >             at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    >             at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    >             at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
    >             at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    >             at java.security.AccessController.doPrivileged(Native Method)
    >             at javax.security.auth.Subject.doAs(Subject.java:415)
    >             at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    >             at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
    >     
    >             at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    >             at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    >             at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    >             at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    >             at 
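From the trace, root is denied WRITE access to `/user` (owned by `hdfs:supergroup` with mode `drwxr-xr-x`), so I suspect root simply has no home directory in HDFS. A workaround I am considering (a sketch using the standard `hadoop fs` commands; I have not confirmed this is the recommended fix for Pivotal HD) is to create `/user/root` as the hdfs superuser:

    # Create an HDFS home directory for root and hand ownership to root
    sudo -u hdfs hadoop fs -mkdir -p /user/root
    sudo -u hdfs hadoop fs -chown root:root /user/root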
    
  2. Error when executing the job as the hdfs user:

    sudo -u hdfs hadoop jar /usr/lib/gphd/hadoop-mapreduce/hadoop-mapreduce-examples-2.2.0-gphd-3.1.0.0.jar pi 2 200

    The following error occurred:

      > Number of Maps  = 2
      >     Samples per Map = 200
      >     Wrote input for Map #0
      >     Wrote input for Map #1
      >     Starting Job
      >     15/01/01 20:48:20 INFO client.RMProxy: Connecting to ResourceManager at phd1.xyz.com/10.44.189.6:8050
      >     15/01/01 20:48:21 INFO input.FileInputFormat: Total input paths to process : 2
      >     15/01/01 20:48:21 INFO mapreduce.JobSubmitter: number of splits:2
      >     15/01/01 20:48:21 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
      >     15/01/01 20:48:21 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
      >     15/01/01 20:48:21 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use
      > mapreduce.map.speculative
      >     15/01/01 20:48:21 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
      >     15/01/01 20:48:21 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use
      > mapreduce.job.output.value.class
      >     15/01/01 20:48:21 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use
      > mapreduce.reduce.speculative
      >     15/01/01 20:48:21 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use
      > mapreduce.job.map.class
      >     15/01/01 20:48:21 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
      >     15/01/01 20:48:21 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use
      > mapreduce.job.reduce.class
      >     15/01/01 20:48:21 INFO Configuration.deprecation: mapreduce.inputformat.class is deprecated. Instead, use
      > mapreduce.job.inputformat.class
      >     15/01/01 20:48:21 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
      >     15/01/01 20:48:21 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use
      > mapreduce.output.fileoutputformat.outputdir
      >     15/01/01 20:48:21 INFO Configuration.deprecation: mapreduce.outputformat.class is deprecated. Instead, use
      > mapreduce.job.outputformat.class
      >     15/01/01 20:48:21 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
      >     15/01/01 20:48:21 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use
      > mapreduce.job.output.key.class
      >     15/01/01 20:48:21 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use
      > mapreduce.job.working.dir
      >     15/01/01 20:48:21 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1420122968684_0002
      >     15/01/01 20:48:22 INFO impl.YarnClientImpl: Submitted application application_1420122968684_0002 to ResourceManager at
      > phd1.xyz.com/10.44.189.6:8050
      >     15/01/01 20:48:22 INFO mapreduce.Job: The url to track the job: http://phd1.persistent.co.in:8088/proxy/application_1420122968684_0002/
      >     15/01/01 20:48:22 INFO mapreduce.Job: Running job: job_1420122968684_0002
      >     15/01/01 20:48:26 INFO mapreduce.Job: Job job_1420122968684_0002 running in uber mode : false
      >     15/01/01 20:48:26 INFO mapreduce.Job:  map 0% reduce 0%
      >     15/01/01 20:48:26 INFO mapreduce.Job: Job job_1420122968684_0002 failed with state FAILED due to: Application
      > application_1420122968684_0002 failed 2 times due to AM Container for
      > appattempt_1420122968684_0002_000002 exited with  exitCode: 1 due to:
      > Exception from container-launch:
      >     org.apache.hadoop.util.Shell$ExitCodeException:
      >             at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
      >             at org.apache.hadoop.util.Shell.run(Shell.java:379)
      >             at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
      >             at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
      >             at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
      >             at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
      >             at java.util.concurrent.FutureTask.run(FutureTask.java:262)
      >             at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
      >             at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
      >             at java.lang.Thread.run(Thread.java:745)
      >     
      >     
      >     .Failing this attempt.. Failing the application.
      >     15/01/01 20:48:26 INFO mapreduce.Job: Counters: 0
      >     Job Finished in 5.973 seconds
      >     java.io.FileNotFoundException: File does not exist: hdfs://phd1.xyz.com:8020/user/hdfs/QuasiMonteCarlo_1420125497811_11863122/out/reduce-out
      >             at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1120)
      >             at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1112)
      >             at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
      >             at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1112)
      >             at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1749)
      >             at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1773)
      >             at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
      >             at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
      >             at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
      >             at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
      >             at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      >             at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      >             at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      >             at java.lang.reflect.Method.invoke(Method.java:606)
      >             at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
      >             at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
      >             at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
      >             at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      >             at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
      >             at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
      >             at java.lang.reflect.Method.invoke(Method.java:606)
      >             at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
      
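Since the AM container fails with exit code 1 and an empty `ExitCodeException`, I understand the container's own stderr is what needs to be inspected. A sketch of the commands I would use (assuming YARN log aggregation is enabled; otherwise the logs stay in the NodeManager's local container-log directory on the node that ran the AM):

    # Fetch aggregated container logs for the failed application
    sudo -u hdfs yarn logs -applicationId application_1420122968684_0002

These logs are empty/unavailable in my setup so far, which is why I am stuck on the root cause.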

Please let me know how to resolve this error.

Thanks
