Hadoop error when running a job

Date: 2012-06-30 09:31:34

Tags: java exception hadoop

I am trying to run an example job and get the following output:

12/06/30 12:27:39 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/06/30 12:27:39 INFO input.FileInputFormat: Total input paths to process : 7
12/06/30 12:27:40 INFO mapred.JobClient: Running job: job_local_0001
12/06/30 12:27:40 INFO input.FileInputFormat: Total input paths to process : 7
12/06/30 12:27:40 INFO mapred.MapTask: io.sort.mb = 100
12/06/30 12:27:41 INFO mapred.MapTask: data buffer = 79691776/99614720
12/06/30 12:27:41 INFO mapred.MapTask: record buffer = 262144/327680
12/06/30 12:27:41 INFO mapred.JobClient:  map 0% reduce 0%
12/06/30 12:27:41 INFO mapred.MapTask: Starting flush of map output
12/06/30 12:27:41 WARN mapred.LocalJobRunner: job_local_0001
java.io.IOException: Expecting a line not the end of stream
    at org.apache.hadoop.fs.DF.parseExecResult(DF.java:109)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:179)
    at org.apache.hadoop.util.Shell.run(Shell.java:134)
    at org.apache.hadoop.fs.DF.getAvailable(DF.java:73)
    at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:329)
    at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
    at org.apache.hadoop.mapred.MapOutputFile.getSpillFileForWrite(MapOutputFile.java:107)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1221)
    at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1129)
    at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:549)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:623)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
12/06/30 12:27:42 INFO mapred.JobClient: Job complete: job_local_0001
12/06/30 12:27:42 INFO mapred.JobClient: Counters: 0

Does anyone know why I am getting this error? The Hadoop version is 0.20.2.

1 Answer:

Answer 0 (score: 3)

Apparently the df command has to be available on the machine where Eclipse is running. In my case I had two Ubuntu VMs (acting as master and slave) and ran Eclipse on Windows with the Hadoop plugin. After installing Cygwin and adding it to the PATH, the error stopped appearing.
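The reason this matters is visible in the stack trace: Hadoop's org.apache.hadoop.fs.DF shells out to the df command to check free disk space before spilling map output, and DF.parseExecResult throws "Expecting a line not the end of stream" when that command produces no output (for example, because df cannot be found on the PATH of the JVM running the local job). Below is a minimal, self-contained sketch for checking whether df is callable from your JVM's environment; the class name DfCheck and the exact df -k invocation are illustrative assumptions and not Hadoop's actual code.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    public class DfCheck {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Directory to probe; Hadoop checks the local dirs it wants to spill map output to.
            String dir = args.length > 0 ? args[0] : ".";

            // Run a df command similar to what Hadoop's DF class executes internally.
            Process p = new ProcessBuilder("df", "-k", dir).start();
            try (BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                String header = in.readLine(); // first line: column headers
                String data = in.readLine();   // second line: the actual usage figures
                if (data == null) {
                    // Mirrors the "Expecting a line not the end of stream" failure:
                    // df produced no parsable output from this environment.
                    System.err.println("df returned no data line; Hadoop's DF would fail here");
                } else {
                    System.out.println(header);
                    System.out.println(data);
                }
            }
            System.out.println("df exit code: " + p.waitFor());
        }
    }

If this sketch fails or prints nothing when run from the same environment as Eclipse (e.g. a plain Windows JVM without Cygwin on the PATH), it points to the same root cause as the error above.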