I am trying to run a Java program on Hadoop 2.3.0 that calls a GPU routine through JNI, but I get the following error:
java.lang.Exception: java.lang.UnsatisfiedLinkError: affy.qualityControl.PLM.wlsAcc([D[D[DII)V
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.UnsatisfiedLinkError: affy.qualityControl.PLM.wlsAcc([D[D[DII)V
at affy.qualityControl.PLM.wlsAcc(Native Method)
at affy.qualityControl.PLM.rlm_fit_anova(PLM.java:141)
at affy.qualityControl.PLM.PLMsummarize(PLM.java:31)
at affy.qualityControl.SummarizePLMReducer.reduce(SummarizePLMReducer.java:59)
at affy.qualityControl.SummarizePLMReducer.reduce(SummarizePLMReducer.java:12)
at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171)
at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
I suspect the error is caused by JNI. I wrote a small standalone Java test that calls my GPU code (wlsAcc) through JNI, and it runs fine. I also built my GPU shared library, with all of its libraries linked together. In addition, I added the following to my MapReduce driver code (my GPU code is called from the Reducer):
setInputParameters(conf, args);
DistributedCache.createSymlink(conf);
DistributedCache.addCacheFile(new URI("/user/sniu/libjniWrapper.so#libjniWrapper.so"), conf);
conf.set("mapred.reduce.child.java.opts", "-Djava.library.path=.");
I also copied libjniWrapper.so to the /user/sniu/ directory on HDFS. I still cannot figure out why Hadoop does not find my native shared library. Does anyone know where my problem is?
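For context, the native library is loaded on the Java side roughly like this (a minimal sketch for illustration; the class name, modifiers, and parameter names are assumptions, only System.loadLibrary and the ([D[D[DII)V signature come from the error above):

package affy.qualityControl;

public class jniWrapper {
    static {
        // Resolved against java.library.path; with -Djava.library.path=.
        // (set in the driver above) this should find the DistributedCache
        // symlink libjniWrapper.so in the task's working directory.
        System.loadLibrary("jniWrapper");
    }

    // Declaration matching the JNI signature ([D[D[DII)V from the stack
    // trace: three double[] arguments, two ints, void return.
    public native void wlsAcc(double[] a, double[] b, double[] c, int rows, int cols);
}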
Answer 0 (score: 1)
The problem is solved now. The issue was in my native C code: I had originally written the function name as:
JNIEXPORT void JNICALL Java_jniWrapper_wlsAcc
whereas the correct form should be:
JNIEXPORT void JNICALL Java_affy_qualityControl_jniWrapper_wlsAcc
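The mangled symbol name is "Java_" plus the fully qualified class name with dots replaced by underscores, plus "_" and the method name, so the package has to be part of it. The least error-prone way to get it right is to regenerate the header with javah (or javac -h on newer JDKs) and copy the prototype from it. A minimal sketch of the Java declaration and the prototype it produces (parameter names are placeholders):

package affy.qualityControl;

public class jniWrapper {
    // Running "javah affy.qualityControl.jniWrapper" on this class generates
    // a header containing:
    //   JNIEXPORT void JNICALL Java_affy_qualityControl_jniWrapper_wlsAcc
    //     (JNIEnv *, jobject, jdoubleArray, jdoubleArray, jdoubleArray, jint, jint);
    // (jclass instead of jobject if the method is declared static), which is
    // the Java_affy_qualityControl_jniWrapper_wlsAcc symbol shown above.
    public native void wlsAcc(double[] a, double[] b, double[] c, int rows, int cols);
}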