Determining whether the mapper was executed

Date: 2013-04-22 10:45:31

Tags: java linux hadoop mapreduce mapper

Was the mapper I defined actually executed, and if not, what could be the reasons? To check, I write the paths read from the database to a text file on the local file system of the node that runs the mapper. Here is the code:

package org.myorg;

import java.io.*;
import java.util.*;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.logging.Level;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;


public class ParallelIndexation {

    public static class Map extends MapReduceBase implements
            Mapper<LongWritable, Text, Text, LongWritable> {
        private final static LongWritable zero = new LongWritable(0);
        private Text word = new Text();


        public void map(LongWritable key, Text value,
                OutputCollector<Text, LongWritable> output, Reporter reporter)
                throws IOException {

            Configuration conf = new Configuration();
            int CountComputers;
            // Read the number of compute nodes from a file on the task node's local file system
            FileInputStream fstream = new FileInputStream(
                    "/export/hadoop-1.0.1/bin/countcomputers.txt");
            BufferedReader br = new BufferedReader(new InputStreamReader(fstream));
            String result = br.readLine();
            CountComputers = Integer.parseInt(result);
            br.close();
            fstream.close();
            Connection con = null;
            Statement st = null;
            ResultSet rs = null;
            String url = "jdbc:postgresql://192.168.1.8:5432/NexentaSearch";
            String user = "postgres";
            String password = "valter89";
            ArrayList<String> paths = new ArrayList<String>();
            try {
                con = DriverManager.getConnection(url, user, password);
                st = con.createStatement();
                rs = st.executeQuery("select path from tasks order by id");
                while (rs.next()) {
                    paths.add(rs.getString(1));
                }
                // Dump the paths to a local file so we can tell whether this mapper ran
                PrintWriter zzz = null;
                try {
                    zzz = new PrintWriter(new FileOutputStream("/export/hadoop-1.0.1/bin/readwaysfromdatabase.txt"));
                } catch (FileNotFoundException e) {
                    System.out.println("Error");
                    System.exit(0);
                }
                for (int i = 0; i < paths.size(); i++) {
                    zzz.println("paths[i]=" + paths.get(i) + "\n");
                }
                zzz.close();
            } catch (SQLException e) {
                System.out.println("Connection Failed! Check output console");
                e.printStackTrace();
            }
        }
    }
}
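The count-file read in the mapper above can be exercised on its own. A minimal self-contained sketch, with the reader closed in a `finally` block so it is released even if parsing fails (the temporary file here is a stand-in for countcomputers.txt, not the real cluster path):

```java
import java.io.*;

public class CountFileDemo {
    // Reads a single integer from the first line of a text file,
    // closing the reader even if readLine() or parseInt() throws.
    static int readCount(String path) throws IOException {
        BufferedReader br = new BufferedReader(new FileReader(path));
        try {
            return Integer.parseInt(br.readLine().trim());
        } finally {
            br.close();
        }
    }

    public static void main(String[] args) throws IOException {
        // Write a stand-in for countcomputers.txt and read it back.
        File tmp = File.createTempFile("countcomputers", ".txt");
        PrintWriter pw = new PrintWriter(tmp);
        pw.println("4");
        pw.close();
        System.out.println(readCount(tmp.getPath())); // prints 4
        tmp.delete();
    }
}
```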

However, the file /export/hadoop-1.0.1/bin/readwaysfromdatabase.txt was not created, even though the task was placed on one of the slave nodes. Does it follow that no mapper was executed at all? I am also including the console output from the program run:

args[0]=/export/hadoop-1.0.1/bin/input
13/04/22 14:00:53 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/04/22 14:00:53 INFO mapred.FileInputFormat: Total input paths to process : 0
13/04/22 14:00:54 INFO mapred.JobClient: Running job: job_201304221331_0003
13/04/22 14:00:55 INFO mapred.JobClient:  map 0% reduce 0%
13/04/22 14:01:12 INFO mapred.JobClient:  map 0% reduce 100%
13/04/22 14:01:17 INFO mapred.JobClient: Job complete: job_201304221331_0003
13/04/22 14:01:17 INFO mapred.JobClient: Counters: 15
13/04/22 14:01:17 INFO mapred.JobClient:   Job Counters 
13/04/22 14:01:17 INFO mapred.JobClient:     Launched reduce tasks=1
13/04/22 14:01:17 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=9079
13/04/22 14:01:17 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
13/04/22 14:01:17 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
13/04/22 14:01:17 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=7983
13/04/22 14:01:17 INFO mapred.JobClient:   File Output Format Counters 
13/04/22 14:01:17 INFO mapred.JobClient:     Bytes Written=0
13/04/22 14:01:17 INFO mapred.JobClient:   FileSystemCounters
13/04/22 14:01:17 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=21536
13/04/22 14:01:17 INFO mapred.JobClient:   Map-Reduce Framework
13/04/22 14:01:17 INFO mapred.JobClient:     Reduce input groups=0
13/04/22 14:01:17 INFO mapred.JobClient:     Combine output records=0
13/04/22 14:01:17 INFO mapred.JobClient:     Reduce shuffle bytes=0
13/04/22 14:01:17 INFO mapred.JobClient:     Reduce output records=0
13/04/22 14:01:17 INFO mapred.JobClient:     Spilled Records=0
13/04/22 14:01:17 INFO mapred.JobClient:     Total committed heap usage (bytes)=16252928
13/04/22 14:01:17 INFO mapred.JobClient:     Combine input records=0
13/04/22 14:01:17 INFO mapred.JobClient:     Reduce input records=0

I am also including the output from a successful run of the program on a single virtual machine:

12/10/28 10:41:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/10/28 10:41:14 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/10/28 10:41:14 INFO mapred.FileInputFormat: Total input paths to process : 1
12/10/28 10:41:15 INFO mapred.JobClient: Running job: job_local_0001
12/10/28 10:41:15 INFO mapred.Task:  Using ResourceCalculatorPlugin : null
12/10/28 10:41:15 INFO mapred.MapTask: numReduceTasks: 1
12/10/28 10:41:15 INFO mapred.MapTask: io.sort.mb = 100
12/10/28 10:41:15 INFO mapred.MapTask: data buffer = 79691776/99614720
12/10/28 10:41:15 INFO mapred.MapTask: record buffer = 262144/327680
12/10/28 10:41:15 INFO mapred.MapTask: Starting flush of map output
12/10/28 10:41:16 INFO mapred.JobClient:  map 0% reduce 0%
12/10/28 10:41:17 INFO mapred.MapTask: Finished spill 0
12/10/28 10:41:17 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
12/10/28 10:41:18 INFO mapred.LocalJobRunner: file:/export/hadoop-1.0.1/bin/input/paths.txt:0+156
12/10/28 10:41:18 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
12/10/28 10:41:18 INFO mapred.Task:  Using ResourceCalculatorPlugin : null
12/10/28 10:41:18 INFO mapred.LocalJobRunner: 
12/10/28 10:41:18 INFO mapred.Merger: Merging 1 sorted segments
12/10/28 10:41:18 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 199 bytes
12/10/28 10:41:18 INFO mapred.LocalJobRunner: 
12/10/28 10:41:19 INFO mapred.JobClient:  map 100% reduce 0%
12/10/28 10:41:19 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
12/10/28 10:41:19 INFO mapred.LocalJobRunner: 
12/10/28 10:41:19 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
12/10/28 10:41:19 INFO mapred.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to file:/export/hadoop-1.0.1/bin/output
12/10/28 10:41:21 INFO mapred.LocalJobRunner: reduce > reduce
12/10/28 10:41:21 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
12/10/28 10:41:22 INFO mapred.JobClient:  map 100% reduce 100%
12/10/28 10:41:22 INFO mapred.JobClient: Job complete: job_local_0001
12/10/28 10:41:22 INFO mapred.JobClient: Counters: 18
12/10/28 10:41:22 INFO mapred.JobClient:   File Input Format Counters 
12/10/28 10:41:22 INFO mapred.JobClient:     Bytes Read=156
12/10/28 10:41:22 INFO mapred.JobClient:   File Output Format Counters 
12/10/28 10:41:22 INFO mapred.JobClient:     Bytes Written=177
12/10/28 10:41:22 INFO mapred.JobClient:   FileSystemCounters
12/10/28 10:41:22 INFO mapred.JobClient:     FILE_BYTES_READ=9573
12/10/28 10:41:22 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=73931
12/10/28 10:41:22 INFO mapred.JobClient:   Map-Reduce Framework
12/10/28 10:41:22 INFO mapred.JobClient:     Reduce input groups=4
12/10/28 10:41:22 INFO mapred.JobClient:     Map output materialized bytes=203
12/10/28 10:41:22 INFO mapred.JobClient:     Combine output records=4
12/10/28 10:41:22 INFO mapred.JobClient:     Map input records=1
12/10/28 10:41:22 INFO mapred.JobClient:     Reduce shuffle bytes=0
12/10/28 10:41:22 INFO mapred.JobClient:     Reduce output records=4
12/10/28 10:41:22 INFO mapred.JobClient:     Spilled Records=8
12/10/28 10:41:22 INFO mapred.JobClient:     Map output bytes=189
12/10/28 10:41:22 INFO mapred.JobClient:     Total committed heap usage (bytes)=321527808
12/10/28 10:41:22 INFO mapred.JobClient:     Map input bytes=156
12/10/28 10:41:22 INFO mapred.JobClient:     Combine input records=0
12/10/28 10:41:22 INFO mapred.JobClient:     Map output records=4
12/10/28 10:41:22 INFO mapred.JobClient:     SPLIT_RAW_BYTES=98
12/10/28 10:41:22 INFO mapred.JobClient:     Reduce input records=0

@ChrisWhite I ran the program with the following command:
./hadoop jar /export/hadoop-1.0.1/bin/ParallelIndexation.jar org.myorg.ParallelIndexation /export/hadoop-1.0.1/bin/input /export/hadoop-1.0.1/bin/output -D mapred.map.tasks=1 1> resultofexecute.txt 2&>1 
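As an aside, the trailing `2&>1` in the command above is not the stderr-to-stdout redirection: depending on the shell it is parsed as an extra argument `2` plus a redirection to a file literally named `1`, or it backgrounds the command. The conventional form is `2>&1` placed after the stdout redirection. A small illustration of the correct order (file names here are just examples):

```shell
#!/bin/sh
# Emit one line on stdout and one on stderr, capturing both in out.txt.
# Order matters: stdout is redirected to the file first, then stderr
# is pointed at wherever stdout now goes.
{ echo "stdout line"; echo "stderr line" >&2; } > out.txt 2>&1

grep -c "line" out.txt   # prints 2: both lines landed in out.txt
```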

My cluster has 4 nodes: one master, one node for the secondarynamenode, and two slaves.

1 Answer:

Answer 0 (score: 0):

How many map tasks were scheduled for your job, and how big is your cluster? If, say, your job ran only 4 map tasks on a cluster of 32 nodes, then most likely 28 of the 32 nodes would have no output (because no map task ran on them).

You can see how many map tasks make up the job, and where they were scheduled to run, in the JobTracker web UI.

Strangely, the dump from your first run shows no map tasks being launched at all, only a reduce task:

13/04/22 14:01:17 INFO mapred.JobClient:     Launched reduce tasks=1

There are also no counters at all for map input/output records, so something about the way you are running this job is odd. Can you share the full command line used to launch the job, and possibly also the driver configuration and job-submission code?