Unable to stop the local job runner from running

Time: 2015-08-05 19:22:32

Tags: hadoop hbase

I am trying to populate an HBase table from a Java program on Hadoop 1, using HTable and LoadIncrementalHFiles.

I have a fully distributed 3-node cluster with 1 master and 2 slaves.

The NameNode and JobTracker run on the master, with 3 DataNodes and 3 TaskTrackers running across all 3 nodes.

3 ZooKeeper instances run on the 3 nodes.

The HMaster runs on the master node, with 3 RegionServers across all 3 nodes.

My core-site.xml contains:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop/TMPDIR/</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://master:54310/</value>
</property>

My mapred-site.xml contains:

<property>
  <name>mapred.job.tracker</name>
  <value>master:54311</value>
</property>
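One thing worth checking with this setup: the `job_local_0001` ID in the log below means the client's `Configuration` resolved `mapred.job.tracker` to Hadoop's built-in default of `"local"`, which happens when `mapred-site.xml` is not on the client's classpath, regardless of what the file on disk says. The sketch below is a Hadoop-free illustration of that lookup: it parses a `mapred-site.xml`-style fragment the way `Configuration` conceptually does and falls back to `"local"` when the property is absent (class and method names here are illustrative, not Hadoop API):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class JobTrackerCheck {
    // Returns the value of mapred.job.tracker from a mapred-site.xml
    // style document, or "local" (Hadoop's default) if it is absent --
    // mimicking what happens when the file is not on the classpath.
    static String jobTracker(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        NodeList props = doc.getElementsByTagName("property");
        for (int i = 0; i < props.getLength(); i++) {
            Element p = (Element) props.item(i);
            String name = p.getElementsByTagName("name").item(0).getTextContent().trim();
            if (name.equals("mapred.job.tracker")) {
                return p.getElementsByTagName("value").item(0).getTextContent().trim();
            }
        }
        return "local"; // default: jobs run in-process via LocalJobRunner
    }

    public static void main(String[] args) throws Exception {
        String onClasspath = "<configuration><property>"
            + "<name>mapred.job.tracker</name><value>master:54311</value>"
            + "</property></configuration>";
        String missing = "<configuration></configuration>";
        System.out.println(jobTracker(onClasspath)); // master:54311
        System.out.println(jobTracker(missing));     // local
    }
}
```

A quick diagnostic along these lines in the real program would be printing `conf.get("mapred.job.tracker")` before submitting the job: if it prints `local`, the cluster's `*-site.xml` files are not being picked up by the client JVM.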

But when I run the program, it gives me the following error:

15/08/06 00:11:14 INFO mapred.TaskRunner: Creating symlink: /usr/local/hadoop/TMPDIR/mapred/local/archive/328189779182527451_-1963144838_2133510842/192.168.72.1/user/hduser/partitions_736cc0de-3c15-4a3d-8ae3-e4d239d73f93 <- /usr/local/hadoop/TMPDIR/mapred/local/localRunner/_partition.lst
15/08/06 00:11:14 WARN fs.FileUtil: Command 'ln -s /usr/local/hadoop/TMPDIR/mapred/local/archive/328189779182527451_-1963144838_2133510842/192.168.72.1/user/hduser/partitions_736cc0de-3c15-4a3d-8ae3-e4d239d73f93 /usr/local/hadoop/TMPDIR/mapred/local/localRunner/_partition.lst' failed 1 with: ln: failed to create symbolic link `/usr/local/hadoop/TMPDIR/mapred/local/localRunner/_partition.lst': No such file or directory
15/08/06 00:11:14 WARN mapred.TaskRunner: Failed to create symlink: /usr/local/hadoop/TMPDIR/mapred/local/archive/328189779182527451_-1963144838_2133510842/192.168.72.1/user/hduser/partitions_736cc0de-3c15-4a3d-8ae3-e4d239d73f93 <- /usr/local/hadoop/TMPDIR/mapred/local/localRunner/_partition.lst
15/08/06 00:11:14 INFO mapred.JobClient: Running job: job_local_0001
15/08/06 00:11:15 INFO util.ProcessTree: setsid exited with exit code 0
15/08/06 00:11:15 INFO mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@35506f5f
15/08/06 00:11:15 INFO mapred.MapTask: io.sort.mb = 100
15/08/06 00:11:15 INFO mapred.JobClient:  map 0% reduce 0%
15/08/06 00:11:17 INFO mapred.MapTask: data buffer = 79691776/99614720
15/08/06 00:11:17 INFO mapred.MapTask: record buffer = 262144/327680
15/08/06 00:11:17 WARN mapred.LocalJobRunner: job_local_0001
java.lang.IllegalArgumentException: Can't read partitions file
     at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:116)
     at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
     at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
     at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:677)
     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:756)
     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
     at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:214)
Caused by: java.io.FileNotFoundException: File _partition.lst does not exist.
     at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:397)
     at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:251)
     at org.apache.hadoop.fs.FileSystem.getLength(FileSystem.java:796)
     at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1479)
     at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1474)
     at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.readPartitions(TotalOrderPartitioner.java:301)
     at org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:88)
     ... 6 more

A few lines from my code:

    Path input = new Path(args[0]);
    input = input.makeQualified(input.getFileSystem(conf));
    Path partitionFile = new Path(input, "_partitions.lst");
    TotalOrderPartitioner.setPartitionFile(conf, partitionFile);
    InputSampler.Sampler<IntWritable, Text> sampler = new InputSampler.RandomSampler<IntWritable, Text>(0.1, 100);
    InputSampler.writePartitionFile(job, sampler);
    job.setNumReduceTasks(2);
    job.setPartitionerClass(TotalOrderPartitioner.class);

    job.setJarByClass(TextToHBaseTransfer.class);       
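One detail that may matter in the snippet above: the code sets the partition file to `_partitions.lst`, yet the stack trace complains about the default name `_partition.lst`, suggesting the setting never reached the configuration the job actually reads. In Hadoop 1, `Job` takes a defensive copy of the `Configuration` passed to its constructor, so mutating the original `conf` after the `Job` exists has no effect. The Hadoop-free sketch below demonstrates that copy-on-construct pitfall with a plain map standing in for `Configuration` (all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class ConfCopyPitfall {
    // Stand-in for org.apache.hadoop.mapreduce.Job, which copies the
    // Configuration it is given at construction time.
    static class FakeJob {
        private final Map<String, String> conf;
        FakeJob(Map<String, String> conf) {
            this.conf = new HashMap<>(conf); // defensive copy, like Job(conf)
        }
        String get(String key, String dflt) {
            return conf.getOrDefault(key, dflt);
        }
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        FakeJob job = new FakeJob(conf);

        // Mutating the original conf AFTER the job was created...
        conf.put("total.order.partitioner.path", "_partitions.lst");

        // ...is invisible to the job, which falls back to the default,
        // mirroring the "_partition.lst" name seen in the stack trace.
        System.out.println(job.get("total.order.partitioner.path",
                                   "_partition.lst")); // _partition.lst
    }
}
```

If this is what is happening, the usual pattern is to call `TotalOrderPartitioner.setPartitionFile(job.getConfiguration(), partitionFile)` so the setting lands on the configuration the job will actually use.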

Why is it still running the local job runner and giving me "Can't read partitions file"?

What is missing from my cluster configuration?

0 Answers:

There are no answers yet.