Why can't Hadoop Streaming find my script?

Asked: 2015-08-19 01:06:42

Tags: perl hadoop

I am streaming two scripts, wordCountMap.pl and wordCountReduce.pl, in Hadoop; they are supposed to count the number of occurrences of each word in a file.

But Hadoop keeps complaining about wordCountMap.pl. My command and its output are shown below.

Command:

hadoop jar /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar -input wordCount/words.txt -output output -mapper wordCount/wordCountMap.pl -file wordCount/wordCountMap.pl -reducer wordCount/wordCuntReduce.pl -file wordCount/wordCountReduce.pl

Output:

15/08/18 20:09:50 WARN streaming.StreamJob: -file option is deprecated, please use generic option -files instead.
15/08/18 20:09:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
File: /home/hduser/wordCount/wordCountMap.pl does not exist, or is not readable.
Try -help for more information
Streaming Command Failed!

However, wordCountMap.pl looks fine to me: when I type

hadoop fs -cat wordCount/wordCountMap.pl

I get:

15/08/18 20:21:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
    #!/usr/bin/perl -w
    while(<STDIN>) {
        chomp;
        @words = split;
        foreach $w (@words) {
            $key = $w;
            $value = "1";
            print "$key\t$value\n";
        }
    }

Can someone tell me what is wrong with my command? (I assume we can safely ignore the WARN messages above.)

For reference, wordCountReduce.pl is:

#!/usr/bin/perl -w
# Sum the counts emitted by the mapper. Hadoop sorts the mapper output by
# key before the reduce phase, so all lines for one word arrive together.
$count = 0;
while (<STDIN>) {
    chomp;
    ($key, $value) = split "\t";

    if (!defined($oldkey)) {       # very first line
        $oldkey = $key;
        $count  = $value;
    } elsif ($oldkey eq $key) {    # same word: keep accumulating
        $count = $count + $value;
    } else {                       # new word: emit the previous total
        print "$oldkey\t$count\n";
        $oldkey = $key;
        $count  = $value;
    }
}
print "$oldkey\t$count\n";         # emit the total for the last word

and words.txt is:

a a b
b c
a

and the output of "hadoop fs -ls wordCount" is:

15/08/18 21:27:54 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 3 items
-rw-r--r--   1 hduser supergroup        145 2015-08-18 20:04 wordCount/wordCountMap.pl
-rw-r--r--   1 hduser supergroup        346 2015-08-18 20:04 wordCount/wordCountReduce.pl
-rw-r--r--   1 hduser supergroup         12 2015-08-18 20:04 wordCount/words.txt
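
As a quick sanity check outside Hadoop, the two scripts can be chained locally, with sort standing in for the shuffle phase (this assumes both .pl files sit in the current local directory):

cat words.txt | perl wordCountMap.pl | sort | perl wordCountReduce.pl

which, for the words.txt above, should print:

a	3
b	2
c	1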

Thanks in advance!

1 Answer:

Answer 0 (score: 0):

If you look closely at the instructions at http://www.michael-noll.com/tutorials/writing-an-hadoop-mapreduce-program-in-python/ :

hduser@ubuntu:/usr/local/hadoop$ bin/hadoop jar contrib/streaming/hadoop-*streaming*.jar -file /home/hduser/mapper.py -mapper /home/hduser/mapper.py -file /home/hduser/reducer.py -reducer /home/hduser/reducer.py -input /user/hduser/gutenberg/* -output /user/hduser/gutenberg-output

it clearly shows that there is no need to copy mapper.py and reducer.py into HDFS; you can reference both files from the local filesystem, as /path/to/mapper. I believe you can avoid the above error this way.
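
Applied to this question, here is a minimal sketch of a corrected command. It assumes the two Perl scripts also exist under /home/hduser/wordCount on the local filesystem; note that -mapper and -reducer then refer to the shipped file names, and that the original command misspells the reducer as wordCuntReduce.pl:

hadoop jar /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar -input wordCount/words.txt -output output -mapper wordCountMap.pl -file /home/hduser/wordCount/wordCountMap.pl -reducer wordCountReduce.pl -file /home/hduser/wordCount/wordCountReduce.pl

Since -file is deprecated in Hadoop 2.6 (see the first WARN line in the output), the generic -files option can ship both scripts at once; it takes a comma-separated list and must come before the streaming-specific options:

hadoop jar /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar -files /home/hduser/wordCount/wordCountMap.pl,/home/hduser/wordCount/wordCountReduce.pl -input wordCount/words.txt -output output -mapper wordCountMap.pl -reducer wordCountReduce.pl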