My input file looks like this and has already been uploaded to HDFS at /tmp/input (fields are delimited by ^A, a non-printing character; this is how it appears in vi):
A^A10
A^A7
A^A10
A^A5
A^A10
A^A8
B^A1
A^A9
B^A1
A^A9
B^A1
A^A9
B^A1
A^A9
B^A1
A^A9
B^A1
A^A9
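For reference, a file like this can be produced locally with a small script along the following lines (the file name and record list here are only illustrative; chr(1) is the ^A delimiter), and then copied to HDFS with hadoop fs -put input /tmp/input:

# make_input.py -- writes ^A-delimited (chr(1)) records like the sample above
records = [('A', 10), ('A', 7), ('A', 10), ('A', 5), ('B', 1)]
with open('input', 'w') as f:
    for name, score in records:
        f.write('%s%s%d\n' % (name, chr(1), score))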
The mapper I wrote looks like this:
import sys

for line in sys.stdin:
    name, score = line.strip().split(chr(1))
    print '\t'.join([name, str(int(score)+1)])
And the reducer looks like this:
import sys
from datetime import datetime

def calc(inputList):
    return min(inputList)

def main():
    current_key = None
    value_list = []
    key = None
    value = None
    result = None
    for line in sys.stdin:
        try:
            line = line.strip()
            key, value = line.split('\t', 1)
            try:
                value = eval(value)
            except:
                continue
            if current_key == key:
                value_list.append(value)
            else:
                if current_key:
                    try:
                        result = str(calc(value_list))
                    except:
                        pass
                    print '%s\t%s' % (current_key, result)
                value_list = [value]
                current_key = key
        except:
            pass
    print '%s\t%s' % (current_key, str(calc(value_list)))

if __name__ == '__main__':
    main()
I tested the mapper and reducer in the shell and they work for me:
$ cat input | python mapper.py | sort -t$'\t' -k1 | python reducer.py
A 6
B 2
But I can't get it to work with Hadoop streaming:
/usr/bin/hadoop \
    jar /opt/cloudera/parcels/CDH-4.3.0-1.cdh4.3.0.p0.22/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.0.0-mr1-cdh4.3.0.jar \
    -file mapper.py \
    -mapper mapper.py \
    -file reducer.py \
    -reducer reducer.py \
    -input /tmp/input \
    -output /tmp/output
The error output looks like this:
13/10/07 15:59:02 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
13/10/07 15:59:02 INFO mapred.FileInputFormat: Total input paths to process : 1
13/10/07 15:59:02 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-a59347/mapred/local]
13/10/07 15:59:02 INFO streaming.StreamJob: Running job: job_201309301959_0089
13/10/07 15:59:02 INFO streaming.StreamJob: To kill this job, run:
13/10/07 15:59:02 INFO streaming.StreamJob: UNDEF/bin/hadoop job -Dmapred.job.tracker=url1:8021 -kill job_201309301959_0089
13/10/07 15:59:02 INFO streaming.StreamJob: Tracking URL: http://url1:50030/jobdetails.jsp?jobid=job_201309301959_0089
13/10/07 15:59:03 INFO streaming.StreamJob: map 0% reduce 0%
13/10/07 15:59:10 INFO streaming.StreamJob: map 50% reduce 0%
13/10/07 16:00:10 INFO streaming.StreamJob: map 100% reduce 0%
13/10/07 16:00:26 INFO streaming.StreamJob: map 100% reduce 1%
13/10/07 16:00:32 INFO streaming.StreamJob: map 100% reduce 2%
13/10/07 16:00:37 INFO streaming.StreamJob: map 100% reduce 100%
13/10/07 16:00:37 INFO streaming.StreamJob: To kill this job, run:
13/10/07 16:00:37 INFO streaming.StreamJob: UNDEF/bin/hadoop job -Dmapred.job.tracker=url1:8021 -kill job_201309301959_0089
13/10/07 16:00:37 INFO streaming.StreamJob: Tracking URL: http://url1:50030/jobdetails.jsp?jobid=job_201309301959_0089
13/10/07 16:00:37 ERROR streaming.StreamJob: Job not successful. Error: NA
13/10/07 16:00:37 INFO streaming.StreamJob: killJob...
Streaming Command Failed!
Any idea where I went wrong?
Answer (score: 7):
The Hadoop framework doesn't know how to run your mapper and reducer. There are two possible fixes.

FIX 1: Call python explicitly:
-mapper "python mapper.py" -reducer "python reducer.py"
FIX 2: Tell Hadoop where to find the python interpreter. To do that, add a shebang as the top line of each *.py file, for example:

#!/usr/bin/env python

Note, however, that python is not always located in /usr/bin (see copumpkin's comment below).
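As a sketch, the top of mapper.py (and likewise reducer.py) would then read:

#!/usr/bin/env python
# rest of the script unchanged
import sys

for line in sys.stdin:
    name, score = line.strip().split(chr(1))
    print '\t'.join([name, str(int(score)+1)])

Depending on the setup, the scripts may also need to be marked executable (chmod +x mapper.py reducer.py) for this approach to work.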