hadoop: multiple reducers in pseudo-distributed mode?

Date: 2013-07-17 08:51:12

Tags: hadoop mapreduce hadoop-streaming

I am new to Hadoop. I have successfully set up Hadoop in pseudo-distributed mode. I want to run multiple reducers by passing the option -D mapred.reduce.tasks=2 (using hadoop-streaming), but there is still only one reducer.

According to what I found on Google, mapred.LocalJobRunner limits the number of reducers to 1. But I wonder whether there is any workaround to get more reducers?

My Hadoop configuration files:

[admin@localhost string-count-hadoop]$ cat ~/hadoop-1.1.2/conf/core-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/admin/hadoop-data/tmp</value>
    </property>
</configuration>



[admin@localhost string-count-hadoop]$ cat ~/hadoop-1.1.2/conf/mapred-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
</configuration>



[admin@localhost string-count-hadoop]$ cat ~/hadoop-1.1.2/conf/hdfs-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>/home/admin/hadoop-data/name</value>
    </property>

    <property>
        <name>dfs.data.dir</name>
        <value>/home/admin/hadoop-data/data</value>
    </property>

    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property> 
</configuration>

How I start the job:

[admin@localhost string-count-hadoop]$ cat hadoop-startjob.sh 
#!/bin/sh

~/hadoop-1.1.2/bin/hadoop jar ~/hadoop-1.1.2/contrib/streaming/hadoop-streaming-1.1.2.jar \
        -D mapred.job.name=string-count \
        -D mapred.reduce.tasks=2 \
        -mapper  mapper  \
        -file    mapper  \
        -reducer reducer \
        -file    reducer \
        -input   $1      \
        -output  $2

[admin@localhost string-count-hadoop]$ ./hadoop-startjob.sh /z/programming/testdata/items_sequence /z/output
packageJobJar: [mapper, reducer] [] /tmp/streamjob837249979139287589.jar tmpDir=null
13/07/17 20:21:10 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/07/17 20:21:10 WARN snappy.LoadSnappy: Snappy native library not loaded
13/07/17 20:21:10 INFO mapred.FileInputFormat: Total input paths to process : 1
13/07/17 20:21:11 WARN mapred.LocalJobRunner: LocalJobRunner does not support symlinking into current working dir.
...
...

1 Answer:

Answer 0 (score: 1)

Try changing the core-site.xml property from

<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
</property>

to

<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000/</value>
</property>

That is, add an extra / after 9000, then restart all the daemons. Without the trailing slash, the streaming job falls back to the local runner instead of being submitted to the JobTracker at localhost:9001 (the WARN mapred.LocalJobRunner line in your job output is the tell-tale sign), and LocalJobRunner caps the number of reduce tasks at one.
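
As a quick sanity check, here is a sketch assuming the hadoop-startjob.sh script and paths from the question (/z/output2 is just a hypothetical fresh output directory, since Hadoop refuses to write into an existing one). Once the job is actually submitted to the JobTracker, mapred.reduce.tasks=2 should produce two part files in the output:

# Restart all daemons so the new fs.default.name value takes effect
~/hadoop-1.1.2/bin/stop-all.sh
~/hadoop-1.1.2/bin/start-all.sh

# Re-run the job against a fresh output directory
./hadoop-startjob.sh /z/programming/testdata/items_sequence /z/output2

# With two reduce tasks, the output should contain part-00000 and part-00001
~/hadoop-1.1.2/bin/hadoop fs -ls /z/output2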