I need to read images from HBase and convert them to OpenCV Mat objects to run face detection. My code is as follows:
public static class FaceCountMapper extends TableMapper<Text, Text> {

    private CascadeClassifier faceDetector;

    public void setup(Context context) throws IOException, InterruptedException {
        if (context.getCacheFiles() != null && context.getCacheFiles().length > 0) {
            URI mappingFileUri = context.getCacheFiles()[0];
            if (mappingFileUri != null) {
                System.out.println(mappingFileUri);
                faceDetector = new CascadeClassifier(mappingFileUri.toString());
            }
        }
        super.setup(context);
    } // setup()

    public ArrayList<Object> detectFaces(Mat image, String file_name) {
        ArrayList<Object> facemap = new ArrayList<Object>();
        MatOfRect faceDetections = new MatOfRect();
        faceDetector.detectMultiScale(image, faceDetections);
        System.out.println(String.format("Detected %s faces in %s",
                faceDetections.toArray().length, file_name));
        facemap.add(faceDetections.toArray().length);
        return facemap;
    }

    public void map(ImmutableBytesWritable row, Result result, Context context)
            throws InterruptedException, IOException {
        String file_name = Bytes.toString(result.getValue(Bytes.toBytes("Filename"), Bytes.toBytes("data")));
        String mimetype = Bytes.toString(result.getValue(Bytes.toBytes("mime"), Bytes.toBytes("data")));
        byte[] image_data = result.getValue(Bytes.toBytes("Data"), Bytes.toBytes("data"));

        // Decode the stored image, then copy the decoded pixels into the Mat
        // (assumes ImageIO yields TYPE_3BYTE_BGR, as it does for JPEG).
        // Putting the raw encoded bytes into the Mat would corrupt the pixel data.
        BufferedImage bi = ImageIO.read(new ByteArrayInputStream(image_data));
        Mat mat = new Mat(bi.getHeight(), bi.getWidth(), CvType.CV_8UC3);
        byte[] pixels = ((DataBufferByte) bi.getRaster().getDataBuffer()).getData();
        mat.put(0, 0, pixels);

        detectFaces(mat, file_name);
    }
}
The job configuration is as follows:
Configuration conf = this.getConf();
conf.set("hbase.master", "101.192.0.122:16000");
conf.set("hbase.zookeeper.quorum", "101.192.0.122");
conf.setInt("hbase.zookeeper.property.clientPort", 2181);
conf.set("zookeeper.znode.parent", "/hbase-unsecure");
// Initialize and configure MapReduce job
Job job = Job.getInstance(conf);
job.setJarByClass(FaceCount3.class);
job.setMapperClass(FaceCountMapper.class);
job.getConfiguration().set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
job.getConfiguration().set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
Scan scan = new Scan();
scan.setCaching(500); // the default of 1 is bad for MapReduce jobs
scan.setCacheBlocks(false); // don't set to true for MR jobs
TableMapReduceUtil.initTableMapperJob("Image", // input HBase table name
scan, // Scan instance to control CF and attribute selection
FaceCountMapper.class, // mapper
null, // mapper output key
null, // mapper output value
job);
job.setOutputFormatClass(NullOutputFormat.class); // we emit nothing from the mapper
job.addCacheFile(new URI("/user/hduser/haarcascade_frontalface_alt.xml"));
job.addFileToClassPath(new Path("/user/hduser/hipi-2.1.0.jar"));
job.addFileToClassPath(new Path("/user/hduser/javacpp.jar"));
DistributedCache.addFileToClassPath(new Path("/user/hduser/haarcascade_frontalface_alt.xml"), conf);
conf.set("mapred.job.tracker", "local");
// Execute the MapReduce job and block until it completes
boolean success = job.waitForCompletion(true);
// Return success or failure
return success ? 0 : 1;
When I run it, I get a

java.lang.Exception: java.lang.UnsatisfiedLinkError: org.opencv.objdetect.CascadeClassifier.CascadeClassifier_1(Ljava/lang/String;)J

error, even though opencv.jar is on HADOOP_CLASSPATH.

Answer (score: 2):
An UnsatisfiedLinkError is thrown when an application tries to load a native library (a .so on Linux, a .dll on Windows, or a .dylib on macOS) and that library does not exist. Specifically, to find the required native library, the JVM looks in both the PATH environment variable and the java.library.path system property.

The JVM also throws an UnsatisfiedLinkError if your application has already loaded the library and tries to load it again. So first verify that the native library is present on the application's java.library.path or PATH. If the library still cannot be found, try System.load with the absolute path of the library instead of System.loadLibrary.
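For example, here is a minimal sketch of loading the OpenCV native bindings explicitly before the first OpenCV class is touched (the fallback path is hypothetical; substitute the actual location of libopencv_java*.so on your nodes):

import org.opencv.core.Core;

public class OpenCvNativeLoader {
    static {
        try {
            // Standard lookup: resolves the library name against java.library.path
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        } catch (UnsatisfiedLinkError e) {
            // Fallback: hypothetical absolute path -- adjust to your installation
            System.load("/usr/local/share/OpenCV/java/libopencv_java2413.so");
        }
    }
}

Having opencv.jar on the classpath is not enough by itself: the jar contains only the Java wrappers, and the native binding is resolved on first use, which is exactly where the CascadeClassifier_1 error above is raised.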
In your case, try calling the following method from your code to see what the classpath elements actually are.
import java.net.URL;
import java.net.URLClassLoader;

/**
 * Method printClassPathResources.
 */
public static void printClassPathResources() {
    // LOG is whatever logger the enclosing class declares (e.g. SLF4J/Log4j)
    final ClassLoader cl = ClassLoader.getSystemClassLoader();
    final URL[] urls = ((URLClassLoader) cl).getURLs();
    LOG.info("Print all classpath resources visible to the currently running class");
    for (final URL url : urls) {
        LOG.info(url.getFile());
    }
}
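Calling printClassPathResources() at the start of the mapper's setup() (or in the driver) writes every classpath entry the JVM actually sees into the task logs, so you can confirm whether the OpenCV jar really made it onto the task classpath.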
Based on that output, you can adjust the classpath entries (the OpenCV jar in this case, or whatever else turns out to be missing) and see whether it works.
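Note also that in a MapReduce job the native library has to be available on every task node, not just on the client. One option, sketched here under the assumption that the .so has been copied to HDFS (the path and link name are hypothetical), is to ship it through the distributed cache and load it from the task's working directory:

// Driver: the "#opencv_native.so" fragment makes Hadoop create a symlink
// with that name in each task's working directory.
job.addCacheFile(new URI("hdfs:///user/hduser/libopencv_java2413.so#opencv_native.so"));

// Mapper.setup(): load the symlinked library by absolute path before the
// first OpenCV call (requires java.io.File and java.net.URI imports).
System.load(new File("opencv_native.so").getAbsolutePath());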