I'm trying to make some changes to the Hadoop framework, but I'm stuck setting up my development environment. I cloned Hadoop from git and generated all the Java projects for importing into Eclipse, as described in the EclipseEnvironment Maven instructions. After importing all the projects into Eclipse, I created a plain Java project that is supposed to run a job on Hadoop; I added two project dependencies, hadoop-common and hadoop-mapreduce-client-core, to the project's build path, and all dependencies resolved.
When I run the project, I get the following error:
2013-05-23 12:58:01,531 ERROR util.Shell (Shell.java:checkHadoopHome(230)) - Failed to detect a valid hadoop home directory
java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:213)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:236)
at org.apache.hadoop.util.PlatformName.<clinit>(PlatformName.java:36)
at org.apache.hadoop.security.UserGroupInformation.getOSLoginModuleName(UserGroupInformation.java:314)
at org.apache.hadoop.security.UserGroupInformation.<clinit>(UserGroupInformation.java:359)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2512)
at org.apache.hadoop.fs.FileSystem$Cache$Key.<init>(FileSystem.java:2504)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2371)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:352)
at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:323)
at WordCount.main(WordCount.java:86)
2013-05-23 12:58:01,546 INFO util.Shell (Shell.java:isSetsidSupported(311)) - setsid exited with exit code 0
2013-05-23 12:58:01,730 WARN util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2013-05-23 12:58:02,065 ERROR security.UserGroupInformation (UserGroupInformation.java:doAs(1492)) - PriviledgedActionException as:elma (auth:SIMPLE) cause:java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
Exception in thread "main" java.io.IOException: Cannot initialize Cluster. Please check your configuration for mapreduce.framework.name and the correspond server addresses.
at org.apache.hadoop.mapreduce.Cluster.initialize(Cluster.java:119)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:81)
at org.apache.hadoop.mapreduce.Cluster.<init>(Cluster.java:74)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1229)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
at org.apache.hadoop.mapreduce.Job.connect(Job.java:1225)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1253)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1277)
at WordCount.main(WordCount.java:100)
So how can I run my new Java project against the Hadoop source projects in Eclipse?
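For reference, the "HADOOP_HOME or hadoop.home.dir are not set" error above can usually be worked around in an IDE launch by setting the property programmatically before any Hadoop class is loaded. A minimal sketch, assuming a local run; the path is a placeholder and the local-mode settings are shown as comments so the snippet compiles without the Hadoop jars on the classpath:

```java
public class LocalJobLauncher {
    public static void main(String[] args) {
        // Hadoop's Shell class reads hadoop.home.dir (or the HADOOP_HOME
        // environment variable) in a static initializer, so the property must
        // be set before the first Hadoop class is touched. The path below is
        // a placeholder; point it at your Hadoop distribution directory.
        System.setProperty("hadoop.home.dir", "/path/to/hadoop");

        // For a purely in-process local run, the usual configuration would be:
        //   Configuration conf = new Configuration();
        //   conf.set("fs.defaultFS", "file:///");          // local filesystem
        //   conf.set("mapreduce.framework.name", "local"); // local job runner
        //   Job job = Job.getInstance(conf, "word count");
        //   ... job.waitForCompletion(true);

        System.out.println(System.getProperty("hadoop.home.dir"));
    }
}
```

The second stack trace ("Cannot initialize Cluster") is typically the `mapreduce.framework.name` setting: with no cluster configuration on the classpath, setting it to `local` lets the job run in-process.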
Answer 0 (score: -1)
Since your question is essentially "how do I run Hadoop from the Maven-built sources?", I'll assume you are already able to run vanilla Hadoop successfully. If so, you only need to build your own jars (through Eclipse, or with Maven on the command line) and use them to replace the corresponding jars (same version) in your vanilla Hadoop distribution. That should do the trick and save you time dealing with configuration issues.
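The jar swap described above can be sketched as follows. The real commands (module paths and versions are illustrative, not taken from the question) would be a `mvn package -DskipTests` in the Hadoop source tree followed by a `cp` into the distribution; below, temporary directories stand in for the build output and the distribution so the copy itself is safe to run anywhere:

```shell
# Real-world shape of the swap (illustrative paths):
#   mvn package -DskipTests   # in your Hadoop source tree
#   cp hadoop-common-project/hadoop-common/target/hadoop-common-*.jar \
#      "$HADOOP_HOME/share/hadoop/common/"

BUILD=$(mktemp -d)   # stands in for .../hadoop-common/target
DIST=$(mktemp -d)    # stands in for $HADOOP_HOME/share/hadoop/common
touch "$BUILD/hadoop-common-3.0.0-SNAPSHOT.jar"   # the jar a build would produce
cp "$BUILD"/hadoop-common-*.jar "$DIST/"          # replace the shipped jar
ls "$DIST"
```

The version in the jar name must match the distribution you are replacing into, otherwise Hadoop's scripts will still pick up the old jar.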