I am new to Hadoop streaming with Python. I was able to run the wordcount example explained in most references successfully. However, when I try a small Python script of my own, the job fails with an error, even though the script does very little.

The relevant part of the error output when executing the command is:
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
14/12/13 01:47:31 INFO mapred.LocalJobRunner: map task executor complete.
14/12/13 01:47:31 WARN mapred.LocalJobRunner: job_local174189774_0001
java.lang.Exception: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 2
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 2
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:320)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:533)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
14/12/13 01:47:32 INFO mapreduce.Job: Job job_local174189774_0001 failed with state FAILED due to: NA
14/12/13 01:47:32 INFO mapreduce.Job: Counters: 0
14/12/13 01:47:32 ERROR streaming.StreamJob: Job not Successful!
Streaming Command Failed!
The map.py file is:
import sys

for line in sys.stdin:
    line = line.strip()
    review_lines = line.split('\n')
    for r in review_lines:
        review = r.split('\t')
        print '%s\t%s' % (review[0], review[1])
The red.py file is:
import sys

for line in sys.stdin:
    line = line.strip()
    word = line.split('\t')
    print '%s\t%d' % (word[0], int(word[1]) % 2)
The input I provided (input_file.txt) is:
R1 1
R2 5
R3 3
R4 2
The command used to run the job is:
hadoop jar /usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.4.0.jar -file /home/hduser/map.py -mapper /home/hduser/map.py -file /home/hduser/red.py -reducer /home/hduser/red.py -input /user/hduser/input_file.txt -output /user/hduser/output_file.txt
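Since `subprocess failed with code 2` usually means the Python process itself crashed, it can help to smoke-test the two scripts locally before submitting the job, e.g. with `cat input_file.txt | python map.py | sort | python red.py`. A minimal pure-Python simulation of that pipeline on the sample input is sketched below (the function names are mine, the per-line logic mirrors map.py and red.py, and `print()` is used so it runs under Python 2 or 3):

```python
# Simulate `cat input_file.txt | map.py | sort | red.py` on the sample input.
sample = "R1\t1\nR2\t5\nR3\t3\nR4\t2"

def map_line(line):
    # Same logic as map.py: pass the tab-separated pair through unchanged.
    review = line.strip().split('\t')
    return '%s\t%s' % (review[0], review[1])

def reduce_line(line):
    # Same logic as red.py: emit the key and the value modulo 2.
    word = line.strip().split('\t')
    return '%s\t%d' % (word[0], int(word[1]) % 2)

mapped = sorted(map_line(l) for l in sample.splitlines())  # sort stands in for the shuffle
result = [reduce_line(l) for l in mapped]
print('\n'.join(result))
```

If this runs cleanly but the Hadoop job still fails, the problem is in how Hadoop launches the scripts rather than in their logic.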
Answer 0 (score: 3)
You can try putting this at the top of your script:
#!/usr/bin/env python
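For reference, a complete map.py with the shebang in place might look like the sketch below. The helper function, the blank-line guard, and the use of `sys.stdout.write` instead of the Python 2 `print` statement are my additions, made so the script also runs under Python 3; the original `print` statement would work just as well once the shebang is added.

```python
#!/usr/bin/env python
# map.py: emit each tab-separated (key, value) record from stdin unchanged.
import sys

def map_line(line):
    review = line.strip().split('\t')
    return '%s\t%s' % (review[0], review[1])

def main(stdin=sys.stdin, stdout=sys.stdout):
    for line in stdin:
        if line.strip():  # skip blank lines instead of crashing on review[1]
            stdout.write(map_line(line) + '\n')

if __name__ == '__main__':
    main()
```

Hadoop streaming launches the mapper as an ordinary executable, so without a valid interpreter line the OS cannot run the script and the subprocess exits with a non-zero code.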
Answer 1 (score: 0)
#!/usr/bin/python
worked for me.