InvalidResourceRequestException when running Spark on YARN (Hadoop 2.4) in cluster mode

Date: 2014-10-15 16:12:38

Tags: hadoop bigdata apache-spark yarn

I am using Apache Spark 1.1.0 together with Hadoop 2.4. My cluster is running CDH 5.1.3.

I tried to start Spark on YARN with the following commands:

./spark-shell --master yarn 
./spark-shell --master yarn-client

I get the following exception:

14/10/15 21:33:32 INFO cluster.YarnClientSchedulerBackend: Application report from ASM:
         appMasterRpcPort: 0
         appStartTime: 1413388999108
         yarnAppState: RUNNING

14/10/15 21:33:44 ERROR cluster.YarnClientSchedulerBackend: Yarn application already ended: FAILED

     

====== Node Manager Exception ======

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException): Invalid resource request, requested memory < 0, or requested memory > max configured, requestedMemory=1408, maxMemory=1024
    at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:228)
    at org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.validateResourceRequests(RMServerUtils.java:80)
    at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:444)
    at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
    at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)

    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at $Proxy11.allocate(Unknown Source)
    at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.allocate(ApplicationMasterProtocolPBClientImpl.java:77)
    ... 20 more

1 Answer:

Answer 0 (score: 2):

According to your YARN configuration, the maximum memory an application may request for a single container is 1024 MB, but the Spark client is requesting a 1408 MB container. Either change Spark's configuration so it asks for less RAM per container, or raise the maximum container memory allowed in YARN.
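As a rough sketch of the first option (the 512m values below are illustrative assumptions, not figures from the original post; the 1408 MB request most likely corresponds to Spark's default 1024 MB executor memory plus the default ~384 MB of YARN memory overhead), you can pass a smaller memory setting on the command line:

./spark-shell --master yarn-client --executor-memory 512m --driver-memory 512m

or set the equivalent properties in conf/spark-defaults.conf:

spark.executor.memory   512m
spark.driver.memory     512m

For the second option, a minimal sketch of raising YARN's per-container limit in yarn-site.xml (the 2048 value is only an example; on CDH 5 these properties are normally managed through Cloudera Manager rather than edited by hand):

<property>
  <!-- largest container, in MB, that the ResourceManager will grant -->
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <!-- memory each NodeManager offers; should be at least the value above -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>

After changing either side, restart the affected YARN services (or relaunch the Spark shell) so the new limits take effect.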