I want to run a MapReduce job on my two-node FreeBSD cluster, but I get the following exception:
14/08/27 14:23:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/08/27 14:23:04 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
14/08/27 14:23:04 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
14/08/27 14:23:04 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
14/08/27 14:23:04 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
14/08/27 14:23:04 INFO mapreduce.JobSubmitter: Cleaning up the staging area file:/tmp/hadoop-otlam/mapred/staging/otlam968414084/.staging/job_local968414084_0001
Exception in thread "main" java.util.NoSuchElementException
at java.util.StringTokenizer.nextToken(StringTokenizer.java:349)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:565)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:534)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.checkPermissionOfOther(ClientDistributedCacheManager.java:276)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.isPublic(ClientDistributedCacheManager.java:240)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineCacheVisibilities(ClientDistributedCacheManager.java:162)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:58)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
...
This happens when I call job.waitForCompletion(true); on a newly created MapReduce job. The NoSuchElementException is thrown because the StringTokenizer has no more elements at the point where nextToken() is called.
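To illustrate the mechanism, here is a minimal, self-contained sketch (plain JDK; the class name EmptyOutputDemo is made up for this example). A tokenizer built from an empty string fails in exactly this way on the first nextToken() call:

import java.util.StringTokenizer;

public class EmptyOutputDemo {
    public static void main(String[] args) {
        // An empty string yields a tokenizer with no tokens at all
        StringTokenizer t = new StringTokenizer("");
        // The very first nextToken() call throws
        // java.util.NoSuchElementException
        String permission = t.nextToken();
    }
}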
I looked at the source code and found the following in RawLocalFileSystem.java:
/// loads permissions, owner, and group from `ls -ld`
private void loadPermissionInfo() {
  IOException e = null;
  try {
    String output = FileUtil.execCommand(new File(getPath().toUri()),
        Shell.getGetPermissionCommand());
    StringTokenizer t =
        new StringTokenizer(output, Shell.TOKEN_SEPARATOR_REGEX);
    //expected format
    //-rw-------    1     username    groupname ...
    String permission = t.nextToken();
As far as I can tell, Hadoop uses ls -ld to look up the permissions of a particular file, and the command works perfectly when I run it in a console myself. Unfortunately, I have not yet figured out which file it is checking the permissions of.
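To see which file is being inspected, the same call sequence can be replayed by hand. This is only a rough sketch, assuming the Hadoop 2.4.x jars are on the classpath; it reuses the public Shell helpers from the snippet above, and the class name PermissionProbe is invented for this example:

import java.util.Arrays;
import java.util.StringTokenizer;
import org.apache.hadoop.util.Shell;

public class PermissionProbe {
    public static void main(String[] args) throws Exception {
        // Build the same command loadPermissionInfo() runs,
        // e.g. "/bin/ls -ld <path>" on Unix-like systems
        String[] base = Shell.getGetPermissionCommand();
        String[] cmd = Arrays.copyOf(base, base.length + 1);
        cmd[cmd.length - 1] = args[0];
        String output = Shell.execCommand(cmd);
        System.out.println("raw output: [" + output + "]");
        // Empty output would leave the tokenizer without a first token
        StringTokenizer t = new StringTokenizer(output, Shell.TOKEN_SEPARATOR_REGEX);
        System.out.println("permission token: " + t.nextToken());
    }
}

Running this against a file in the staging directory should show whether the command output is empty or in an unexpected format.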
The Hadoop version is 2.4.1 and the HBase version is 0.98.4; I am using the Java API. Other operations such as creating a table work fine. Has anyone run into a similar problem, or knows what to do?
EDIT: I just found out that this is a Hadoop-related problem. Even the simplest MapReduce operation, without using HDFS at all, gives me the same exception.
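For reference, a map-only job of the following shape is already enough to hit the exception during submission; this is a minimal sketch with placeholder class and path names, not my actual code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MinimalJob {
    // The inherited identity Mapper is enough for a smoke test
    public static class IdentityMapper
            extends Mapper<LongWritable, Text, LongWritable, Text> {
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "minimal");
        job.setJarByClass(MinimalJob.class);
        job.setMapperClass(IdentityMapper.class);
        job.setNumReduceTasks(0); // map-only, no reducer needed
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // The NoSuchElementException surfaces here, inside submitJobInternal()
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The exception comes out of job.waitForCompletion(true), matching the stack trace above.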
Answer 0 (score: 0)
Please check whether this solves your problem. If yours is a permission problem, then this should work:
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public static void main(String[] args) throws Exception {
    // Set user group information: act as the "hdfs" user
    UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hdfs");
    // Run the privileged action under that user's identity
    ugi.doAs(new PrivilegedExceptionAction<Void>() {
        public Void run() throws Exception {
            // Create the configuration object
            Configuration config = new Configuration();
            config.set("fs.defaultFS", "hdfs://ip:port/");
            config.set("hadoop.job.ugi", "hdfs");
            FileSystem dfs = FileSystem.get(config);
            // ... rest of the job setup goes here
            return null;
        }
    });
}