Job token file not found when running the Hadoop wordcount example

Asked: 2012-04-24 17:43:42

Tags: hadoop cluster-computing word-count

I have just successfully installed Hadoop on a small cluster. Now I am trying to run the wordcount example, but I am getting this error:

hdfs://localhost:54310/user/myname/test11
12/04/24 13:26:45 INFO input.FileInputFormat: Total input paths to process : 1
12/04/24 13:26:45 INFO mapred.JobClient: Running job: job_201204241257_0003
12/04/24 13:26:46 INFO mapred.JobClient:  map 0% reduce 0%
12/04/24 13:26:50 INFO mapred.JobClient: Task Id : attempt_201204241257_0003_m_000002_0, Status : FAILED
Error initializing attempt_201204241257_0003_m_000002_0:
java.io.IOException: Exception reading file:/tmp/mapred/local/ttprivate/taskTracker/myname/jobcache/job_201204241257_0003/jobToken
    at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:135)
    at org.apache.hadoop.mapreduce.security.TokenCache.loadTokens(TokenCache.java:165)
    at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1179)
    at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1116)
    at org.apache.hadoop.mapred.TaskTracker$5.run(TaskTracker.java:2404)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.FileNotFoundException: File file:/tmp/mapred/local/ttprivate/taskTracker/myname/jobcache/job_201204241257_0003/jobToken does not exist.
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
    at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
    at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
    at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:427)
    at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:129)
    ... 5 more

Any help?

2 answers:

Answer 0 (score: 2)

I just resolved this same error — recursively setting permissions on my Hadoop directory did not help. Following Mohyt's recommendation here, I modified core-site.xml (in the hadoop/conf/ directory) to remove the entry where I had specified the temp directory (the hadoop.tmp.dir property). After letting Hadoop create its own temp directory, I am running error-free.
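For reference, the kind of entry being removed looks roughly like this — hadoop.tmp.dir is the real Hadoop property key, but the path value below is only an illustration, not the asker's actual setting:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value> <!-- example path; delete whichever value you had set -->
  <description>A base for other temporary directories.</description>
</property>

With the property removed, Hadoop falls back to its built-in default of /tmp/hadoop-${user.name}.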

Answer 1 (score: 0)

It is better to create your own temp directory.

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/unmesha/mytmpfolder/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  .....
</configuration>

And grant it permissions:

unmesha@unmesha-virtual-machine:~$ chmod 750 /home/unmesha/mytmpfolder/tmp
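Note that the directory must exist before the daemons start, and the TaskTracker only reads core-site.xml on startup. A minimal sketch, assuming the same path as in the config above and that the Hadoop 1.x control scripts are on your PATH:

mkdir -p /home/unmesha/mytmpfolder/tmp    # create the directory first
chmod 750 /home/unmesha/mytmpfolder/tmp   # owner rwx, group rx, no world access
stop-all.sh && start-all.sh               # restart the daemons so the new tmp dir takes effect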

check this for the core-site.xml configuration.