Unable to get Hadoop job information through the Java client

Date: 2014-03-14 17:56:21

Tags: java hadoop

I am using Hadoop 1.2.1 and trying to print job details through a Java client, but it prints nothing. Here is my Java code:

    import java.net.InetSocketAddress;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobID;
    import org.apache.hadoop.mapred.JobStatus;

    Configuration configuration = new Configuration();
    configuration.addResource(new Path("/usr/local/hadoop/conf/core-site.xml"));
    configuration.addResource(new Path("/usr/local/hadoop/conf/hdfs-site.xml"));
    configuration.addResource(new Path("/usr/local/hadoop/conf/mapred-site.xml")); 
    InetSocketAddress jobtracker = new InetSocketAddress("localhost", 54311);
    JobClient jobClient = new JobClient(jobtracker, configuration);
    jobClient.setConf(configuration);
    JobStatus[] jobs = jobClient.getAllJobs();
    System.out.println(jobs.length); // prints 0
    for (int i = 0; i < jobs.length; i++) {
        JobStatus js = jobs[i];
        JobID jobId = js.getJobID();
        System.out.println(jobId);
    }

But on the JobTracker's history page I can see three jobs, as shown in the screenshot. Can anyone tell me where I went wrong? I just want to print all the job details.

Here are my configuration files:

core-site.xml

    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/tmp</value>
        <description>A base for other temporary directories.</description>
      </property>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:54310</value>
        <description>The name of the default file system.  A URI whose
        scheme and authority determine the FileSystem implementation.  The
        uri's scheme determines the config property (fs.SCHEME.impl) naming
        the FileSystem implementation class.  The uri's authority is used to
        determine the host, port, etc. for a filesystem.</description>
      </property>
    </configuration>

hdfs-site.xml

    <configuration>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
        <description>Default block replication.  The actual number of replications
        can be specified when the file is created.  The default is used if
        replication is not specified at create time.</description>
      </property>
    </configuration>

mapred-site.xml

    <configuration>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:54311</value>
        <description>The host and port that the MapReduce job tracker runs at.
        If "local", then jobs are run in-process as a single map and reduce
        task.</description>
      </property>
    </configuration>
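One thing worth checking before blaming the client code: in Hadoop 1.x, `JobClient.getAllJobs()` only returns jobs the JobTracker still tracks in memory, so completed jobs that have been retired to the job history page may no longer be reported even though the history UI shows them. Below is a minimal sketch (assuming the same `localhost:54311` JobTracker and conf path as above; `ListJobs` is a made-up class name) that builds the client from a `JobConf` and first prints the effective `mapred.job.tracker` value, since a value of `local` would mean the client never contacts the JobTracker at all:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobStatus;

public class ListJobs {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf();
        conf.addResource(new Path("/usr/local/hadoop/conf/mapred-site.xml"));

        // If this prints "local", jobs run in-process and the JobTracker
        // on port 54311 is never consulted.
        System.out.println("mapred.job.tracker = " + conf.get("mapred.job.tracker"));

        JobClient jobClient = new JobClient(conf);
        // Returns only jobs the JobTracker still holds (running or not yet retired).
        for (JobStatus js : jobClient.getAllJobs()) {
            System.out.println(js.getJobID() + " runState=" + js.getRunState());
        }
    }
}
```

Running this against a live cluster while a job is in flight (or shortly after it finishes, before it is retired) should produce a non-empty listing.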

1 Answer:

Answer 0 (score: 0)

Try something like this:

    jobClient.displayTasks(jobID, "map", "completed");

where the Job ID is constructed as

    JobID jobID = new JobID(jobIdentifier, jobNumber);

    TaskReport[] taskReportList = jobClient.getMapTaskReports(jobID);
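Putting the pieces together, a sketch of how this might look (the identifier `"201403141234"` and job number `1` are placeholders; substitute the parts of a real id from the history page, e.g. `job_201403141234_0001`, and reuse the connected `jobClient` from the question):

```java
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.TaskReport;

// Placeholder identifier and number taken from an id like job_201403141234_0001.
JobID jobID = new JobID("201403141234", 1);

// Per-map-task reports for that job.
TaskReport[] taskReportList = jobClient.getMapTaskReports(jobID);
for (TaskReport report : taskReportList) {
    System.out.println(report.getTaskID() + " progress=" + report.getProgress());
}

// Or let JobClient format the task list itself.
jobClient.displayTasks(jobID, "map", "completed");
```

Note that, like `getAllJobs()`, these calls only see jobs the JobTracker still knows about; a retired job's id will simply yield no task reports.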