Multi-node Hadoop: slave nodes cannot access a jar file on the master node

Date: 2013-12-09 18:24:28

Tags: hadoop jar hadoop-streaming

I am using streaming to invoke a jar file that performs some task, like this:

hadoop jar /path/to/hadoop-streaming.jar -input /inDir -output /outDir -file jarscript.sh -mapper jarscript.sh

where jarscript.sh is:

java -jar /path/to/jar/X.jar -arguments
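
For reference, since the streaming framework executes the shipped file directly, the complete script presumably needs a shebang line and the executable bit set. A minimal sketch (the -arguments are placeholders for whatever X.jar actually expects):

#!/bin/bash
# run the jar; streaming feeds input records on stdin and collects output from stdout
java -jar /path/to/jar/X.jar -arguments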

When I run the streaming command, it works fine on the master node, but on the slave nodes I get an error saying X.jar cannot be accessed. How do I fix this? How do I give the slave nodes access to the jar file? Does it need to be in some specific location for the slaves to reach it?

I followed Michael Noll's tutorial, so Hadoop runs under the hduser account, while the jar file sits in another user's home directory, hadoopmaster, so the path is more like /home/hadoopmaster/path/to/jar/X.jar. Could that be the problem?
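
A quick way to confirm that suspicion is to check whether each slave can actually read the path (a sanity-check sketch; slave1 and slave2 are placeholder hostnames, and it assumes the passwordless-ssh setup from Noll's tutorial):

for host in slave1 slave2; do
    # list the jar as hduser on each slave; "No such file" or "Permission denied" pinpoints the issue
    ssh hduser@$host "ls -l /home/hadoopmaster/path/to/jar/X.jar"
done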

So, I tried doing it Donald's way, and the slaves are still giving me this error:

stderr logs

Unable to access jarfile /home/hadoopmaster/Downloads/PaDEL-Descriptor/PaDEL-Descriptor.jar
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:576)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:135)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)



syslog logs

2013-12-09 11:18:13,183 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2013-12-09 11:18:13,338 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201312091116_0001/jars/META-INF <- /app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201312091116_0001/attempt_201312091116_0001_m_000000_1/work/META-INF
2013-12-09 11:18:13,351 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201312091116_0001/jars/jarscript.sh <- /app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201312091116_0001/attempt_201312091116_0001_m_000000_1/work/jarscript.sh
2013-12-09 11:18:13,359 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201312091116_0001/jars/org <- /app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201312091116_0001/attempt_201312091116_0001_m_000000_1/work/org
2013-12-09 11:18:13,371 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201312091116_0001/jars/lib <- /app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201312091116_0001/attempt_201312091116_0001_m_000000_1/work/lib
2013-12-09 11:18:13,386 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201312091116_0001/jars/.job.jar.crc <- /app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201312091116_0001/attempt_201312091116_0001_m_000000_1/work/.job.jar.crc
2013-12-09 11:18:13,387 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201312091116_0001/jars/job.jar <- /app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201312091116_0001/attempt_201312091116_0001_m_000000_1/work/job.jar
2013-12-09 11:18:13,598 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists!
2013-12-09 11:18:13,691 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2013-12-09 11:18:13,695 INFO org.apache.hadoop.mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@31598bd
2013-12-09 11:18:13,770 INFO org.apache.hadoop.mapred.MapTask: Processing split: hdfs://hadoopmaster:54310/BiolData/input/biolink1/compound_id_464726_2d_3D.sdf:0+13142
2013-12-09 11:18:13,785 WARN org.apache.hadoop.io.compress.snappy.LoadSnappy: Snappy native library not loaded
2013-12-09 11:18:13,792 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 1
2013-12-09 11:18:13,800 INFO org.apache.hadoop.mapred.MapTask: io.sort.mb = 100
2013-12-09 11:18:13,840 INFO org.apache.hadoop.mapred.MapTask: data buffer = 79691776/99614720
2013-12-09 11:18:13,840 INFO org.apache.hadoop.mapred.MapTask: record buffer = 262144/327680
2013-12-09 11:18:13,854 INFO org.apache.hadoop.streaming.PipeMapRed: PipeMapRed exec [/app/hadoop/tmp/mapred/local/taskTracker/hduser/jobcache/job_201312091116_0001/attempt_201312091116_0001_m_000000_1/work/./jarscript.sh]
2013-12-09 11:18:13,873 INFO org.apache.hadoop.streaming.PipeMapRed: MRErrorThread done
2013-12-09 11:18:13,877 INFO org.apache.hadoop.streaming.PipeMapRed: R/W/S=1/0/0 in:NA [rec/s] out:NA [rec/s]
2013-12-09 11:18:13,878 INFO org.apache.hadoop.streaming.PipeMapRed: R/W/S=10/0/0 in:NA [rec/s] out:NA [rec/s]
2013-12-09 11:18:13,879 INFO org.apache.hadoop.streaming.PipeMapRed: R/W/S=100/0/0 in:NA [rec/s] out:NA [rec/s]
2013-12-09 11:18:13,883 WARN org.apache.hadoop.streaming.PipeMapRed: java.io.IOException: Stream closed
at java.lang.ProcessBuilder$NullOutputStream.write(ProcessBuilder.java:434)
at java.io.OutputStream.write(OutputStream.java:116)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:569)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:135)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)

2013-12-09 11:18:13,883 INFO org.apache.hadoop.streaming.PipeMapRed: PipeMapRed failed!
2013-12-09 11:18:13,913 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2013-12-09 11:18:13,947 INFO org.apache.hadoop.io.nativeio.NativeIO: Initialized cache for UID to User mapping with a cache timeout of 14400 seconds.
2013-12-09 11:18:13,947 INFO org.apache.hadoop.io.nativeio.NativeIO: Got UserName hduser for UID 1001 from the native implementation
2013-12-09 11:18:13,950 WARN org.apache.hadoop.mapred.Child: Error running child
java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:362)
at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:576)
at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:135)
at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:36)
at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
2013-12-09 11:18:13,953 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task

2 Answers:

Answer 0 (score: 3)

The problem is that your X.jar probably isn't present on every node... right? So you need to ship it out with the job when it runs.

You can add X.jar as another -file argument, like this:

hadoop jar /path/to/hadoop-streaming.jar \
       -input /inDir -output /outDir \
       -file jarscript.sh \
       -file /path/to/jar/X.jar \
       -mapper jarscript.sh

When you run the job, this ships the jar file to every node (the same way jarscript.sh is shipped).

Note that you should no longer use an absolute path in your shell script. X.jar will be in the shell script's current working directory, so you should change the command to java -jar X.jar -arguments.
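
In other words, the shipped script would look something like this (a sketch; the placeholder -arguments stay the same):

#!/bin/bash
# X.jar was shipped into the task's working directory by -file, so a relative path works
java -jar X.jar -arguments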

Answer 1 (score: 2)

How about something like this?

First, before starting the job, put the jar in a known location in HDFS, for example:

hadoop fs -put /path/to/jar/X.jar /lib
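
Depending on the Hadoop version, -put with a nonexistent destination may create a file named /lib rather than a directory, so it can be safer to create the directory explicitly first:

hadoop fs -mkdir /lib
hadoop fs -put /path/to/jar/X.jar /lib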

Then add a line to jarscript.sh that first pulls that jar down from HDFS, like so:

hadoop fs -get /lib/X.jar .
java -jar ./X.jar -arguments
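
One caveat: the hadoop binary has to be on the PATH of the task's environment, not just in hduser's interactive shell; if it isn't, call it by its full path, e.g. /usr/local/hadoop/bin/hadoop (the install location Noll's tutorial uses; adjust if yours differs). Also note that every map task will re-fetch the jar from HDFS, which adds a little startup overhead per task.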

It's a bit of a hack, but I think it should work if you can't get it working the "proper" way with the -file argument that Donald suggested.