I have just started learning Hadoop 1.1.2.
When I ran the WordCount example, both of the following commands seemed to work fine.
Command A:
hadoop jar /usr/local/hadoop/hadoop-examples-1.1.2.jar WordCount input output
Command B:
hadoop jar /usr/local/hadoop/hadoop-examples-1.1.2.jar wordcount input output
The only difference is the main class name, WordCount versus wordcount.
So my question is: is the main class name wordcount case-insensitive?
Update:
@Amar says WordCount does not actually work, and I have checked that he is right. I was misled by the documentation here; the official documentation needs to be updated.
But I still don't know why it has to be wordcount.
Answer 0: (score: 3)
Try running it without wordcount, for example:
hadoop jar /usr/local/hadoop/hadoop-examples-1.1.2.jar input output
You will get the following:
Unknown program 'input' chosen.
Valid program names are:
aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
dbcount: An example job that count the pageview counts from a database.
grep: A map/reduce program that counts the matches of a regex in the input.
join: A job that effects a join over sorted, equally partitioned datasets
multifilewc: A job that counts words from several files.
pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
pi: A map/reduce program that estimates Pi using monte-carlo method.
randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
randomwriter: A map/reduce program that writes 10GB of random data per node.
secondarysort: An example defining a secondary sort to the reduce.
sleep: A job that sleeps at each map and reduce task.
sort: A map/reduce program that sorts the data written by the random writer.
sudoku: A sudoku solver.
teragen: Generate data for the terasort
terasort: Run the terasort
teravalidate: Checking results of terasort
wordcount: A map/reduce program that counts the words in the input files.
So basically the first argument is not a main class name; it is the name of the example program you want to run.
Therefore it should not even accept WordCount, and indeed it does not work for me.
The following command produces the same result as shown above:
bin/hadoop jar hadoop-examples-1.0.4.jar WordCount LICENSE.txt output
For your reference: the main class is defined in the META-INF/MANIFEST.MF file contained in the jar:
Main-Class: org/apache/hadoop/examples/ExampleDriver
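To illustrate why the lookup is case-sensitive, here is a minimal, self-contained sketch of this dispatch pattern: the driver registers each example under a fixed lowercase name in a plain string map, so only an exact match resolves. The class and method names below are illustrative, not Hadoop's actual implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an ExampleDriver-style dispatcher: program names are plain
// string keys, so lookup is exact-match and therefore case-sensitive.
public class DriverSketch {
    private static final Map<String, String> programs = new HashMap<>();

    static {
        // Each example is registered under a lowercase program name,
        // mapped here to the class it would launch.
        programs.put("wordcount", "org.apache.hadoop.examples.WordCount");
        programs.put("grep", "org.apache.hadoop.examples.Grep");
    }

    static String resolve(String name) {
        return programs.get(name); // exact string match, no case folding
    }

    public static void main(String[] args) {
        System.out.println(resolve("wordcount")); // resolves to the WordCount class name
        System.out.println(resolve("WordCount")); // null: unknown program
    }
}
```

This is why wordcount works while WordCount is rejected: the first argument is compared against the registered program names as-is.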
Answer 1: (score: 0)
It is definitely case-sensitive, because it tries to load the class WordCount or wordcount from the jar, depending on the casing. Since Java is case-sensitive in this respect, so is hadoop jar.
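The premise of this answer can be checked directly: class-name lookup in the JVM is exact-match, so a name that differs only in case does not resolve. A small self-contained demo using a standard-library class:

```java
// Demonstrates that JVM class lookup is case-sensitive:
// java.lang.String loads only under its exact spelling.
public class CaseDemo {
    static boolean loads(String name) {
        try {
            Class.forName(name); // throws if no class has exactly this name
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(loads("java.lang.String")); // true
        System.out.println(loads("java.lang.string")); // false
    }
}
```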