I'm trying to use JNI to read frames directly from a video file, decode them in native code, and pass them back as raw 3-byte BGR arrays. I'm also using JVMTI's SetTag and GetTag to "tag" Java objects with a pointer to their corresponding struct, stored as a long long.
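The tagging scheme boils down to something like the following sketch (assuming the can_tag_objects capability was requested when the jvmtiEnv was obtained; the helper names and error handling here are illustrative, not my exact code):

#include <jvmti.h>
#include <cstdint>

struct NativeVideoContext;          // native per-object state

static jvmtiEnv* jvmti;             // obtained once during initialization

// Store the address of the native struct as the object's JVMTI tag.
static void tagWithContext(jobject obj, NativeVideoContext* ctx){
    jvmti->SetTag(obj, (jlong)(intptr_t)ctx);
}

// Recover the native struct from the tag; NULL if the object was never tagged.
static NativeVideoContext* contextFromTag(jobject obj){
    jlong tag = 0;
    if(jvmti->GetTag(obj, &tag) != JVMTI_ERROR_NONE || tag == 0){
        return NULL;
    }
    return (NativeVideoContext*)(intptr_t)tag;
}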
I've run into a small problem with JNI/JVMTI and threads. If a thread is started between calls to my native code (read()), the callback into Java ends up with an array of length 0, while on the native side the array still reports its normal size. I assumed this had something to do with JNIEnv*, since JNIEnv is thread-specific, but I don't see how starting an empty thread would affect that, and I can't find any evidence to support my assumption.
The following code triggers the problem:
package ga.nurupeaches.imgmap.natives;

import ga.nurupeaches.imgmap.utils.YTRegexHelper;

import java.io.IOException;

public class NativeVideoTest {

    public static final int WIDTH = 0, HEIGHT = 0;
    public static final String videoPath = YTRegexHelper.getDirectLinks("rnQBF2CIygg").get(0);

    private NativeVideo video;

    public static void main(String[] args) throws Exception {
        System.load("/home/tsunko/Gunvarrel/ImgMap-rw/src/main/cplusplus/libNativeVideo.so");
        NativeVideo.initialize(DebugCallbackHandler.class);
        NativeVideoTest test = new NativeVideoTest();
        test.nativeWork();
    }

    public NativeVideoTest() throws IOException {
        video = new NativeVideo(new DebugCallbackHandler(this, 1280, 720), WIDTH, HEIGHT);
    }

    public void nativeWork() throws InterruptedException {
        try {
            video.open(videoPath);
        } catch (Exception e) {
            e.printStackTrace();
        }
        video.read();
        new Thread().start();
        video.read(); // Returns 0 length array.
    }
}
read() is implemented as:
JNIEXPORT void JNICALL Java_ga_nurupeaches_imgmap_natives_NativeVideo_read(JNIEnv* env, jobject jthis, jobject callback){
    NativeVideoContext* context = getContext(env, jthis, true);
    if(context == NULL){
        return;
    }

    std::cout << "read: reading frame" << std::endl;
    while(av_read_frame(context->formatContext, &(context->packet)) >= 0){
        std::cout << "read: recv packet" << std::endl;
        if(context->packet.stream_index == context->videoStreamId){
            std::cout << "read: recv video packet" << std::endl;
            avcodec_decode_video2(context->codecContext, context->rawFrame, &(context->frameFinished), &(context->packet));
            std::cout << "read: decoded video" << std::endl;
            if(context->frameFinished){
                std::cout << "read: finished frame; scaling" << std::endl;
                sws_scale(context->imgConvertContext, (const uint8_t* const*)context->rawFrame->data,
                          context->rawFrame->linesize, 0, context->codecContext->height,
                          context->rgbFrame->data, context->rgbFrame->linesize);
                std::cout << "read: scaled image; freeing packet and breaking loop" << std::endl;
                av_free_packet(&(context->packet));
                break;
            }
        }
        av_free_packet(&(context->packet));
    }

    std::cout << "read: init final returning" << std::endl;
    std::cout << "read: beforeSetByteArrayRegion javaArray@" << &(context->javaArray) << ";typeid=" << typeid(context->javaArray).name() << ";bufferSize=" << context->bufferSize << std::endl;
    env->SetByteArrayRegion(context->javaArray, 0, context->bufferSize, (jbyte*)(context->rgbFrame->data[0]));
    doCallback(env, callback, context->javaArray);
    std::cout << "read: afterSetByteArrayRegion javaArray@" << &(context->javaArray) << ";typeid=" << typeid(context->javaArray).name() << ";bufferSize=" << context->bufferSize << std::endl;
}
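doCallback isn't shown here (it lives with the rest of the native sources in the repository linked at the end). A typical JNI implementation of that kind of callback looks roughly like the sketch below; handleData is the method named in the stack trace, but its ([B)V signature here is an assumption:

#include <jni.h>

// Invoke callback.handleData(byte[]) with the filled array (signature assumed).
static void doCallback(JNIEnv* env, jobject callback, jbyteArray data){
    jclass cls = env->GetObjectClass(callback);
    jmethodID mid = env->GetMethodID(cls, "handleData", "([B)V");
    if(mid == NULL){
        return;                     // no such method; a NoSuchMethodError is pending
    }
    env->CallVoidMethod(callback, mid, data);
}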
Log:
_initialize: found id
2764800
entry@getTag: checking jvmti
tag@getTag: null
entry@setTag: checking jvmti
tag@setTag: 140074857163376
entry _open: grabbing context
entry@getTag: checking jvmti
tag@getTag: 140074857163376
_open: nullcheck context
_open: setting source and formatContext
_open: opening input
_open: finding stream info
_open: finding video stream id
_open: fixing any 0 width/height
_open: finding decoder for codec
_open dbg: codecContext@0x7f65b81a6b20
_open dbg: codec_id=28
_open: opening decoder
_open: alloc frames
_open: nullcheck frames
_open: init swscale context
_open: init buffers
_open: reserving 2764800 bytes of memory for our buffer and etc.
entry@getTag: checking jvmti
tag@getTag: 140074857163376
read: reading frame
read: recv packet
read: recv video packet
read: decoded video
read: recv packet
read: recv video packet
read: decoded video
read: recv packet
read: recv video packet
read: decoded video
read: recv packet
read: recv video packet
read: decoded video
read: recv packet
read: recv video packet
read: decoded video
read: finished frame; scaling
read: scaled image; freeing packet and breaking loop
read: init final returning
read: beforeSetByteArrayRegion javaArray@0x7f65b81a6b10;typeid=P11_jbyteArray;bufferSize=2764800
read: afterSetByteArrayRegion javaArray@0x7f65b81a6b10;typeid=P11_jbyteArray;bufferSize=2764800
entry@getTag: checking jvmti
tag@getTag: 140074857163376
read: reading frame
read: recv packet
read: recv video packet
read: decoded video
read: finished frame; scaling
read: scaled image; freeing packet and breaking loop
read: init final returning
read: beforeSetByteArrayRegion javaArray@0x7f65b81a6b10;typeid=P11_jbyteArray;bufferSize=2764800
data.length=0;rawImage.length=2764800
read: afterSetByteArrayRegion javaArray@0x7f65b81a6b10;typeid=P11_jbyteArray;bufferSize=2764800
Exception in thread "main" java.lang.ArrayStoreException
at java.lang.System.arraycopy(Native Method)
at ga.nurupeaches.imgmap.natives.DebugCallbackHandler.handleData(DebugCallbackHandler.java:30)
at ga.nurupeaches.imgmap.natives.NativeVideo.read(Native Method)
at ga.nurupeaches.imgmap.natives.NativeVideo.read(NativeVideo.java:31)
at ga.nurupeaches.imgmap.natives.NativeVideoTest.nativeWork(NativeVideoTest.java:33)
at ga.nurupeaches.imgmap.natives.NativeVideoTest.main(NativeVideoTest.java:18)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Process finished with exit code 1
https://github.com/CirnoTheGenius/ImgMap-rw/tree/master/src/main/cplusplus has the relevant code for anything else. As a note, please excuse any mistakes I've made in C++; I'm still relatively new to it.
Answer 0 (score: 0)
Instead of using a Java byte array via NewByteArray, I passed in a direct ByteBuffer, grabbed the address of the ByteBuffer's memory region, and used memcpy to copy context->rgbFrame->data[0] into it, then invoked the callback with no arguments to tell the callback handler that it was done.
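A minimal sketch of that approach, assuming the direct ByteBuffer is allocated on the Java side (e.g. with ByteBuffer.allocateDirect) and handed to the native code, and that the handler exposes a no-argument notification method (the onFrameReady name is illustrative):

#include <cstdint>
#include <cstring>
#include <jni.h>

// Copy one decoded frame into the direct ByteBuffer and notify the handler.
static void publishFrame(JNIEnv* env, jobject callback, jobject buffer,
                         const uint8_t* frameData, size_t size){
    // Address of the memory region backing the direct ByteBuffer.
    void* dest = env->GetDirectBufferAddress(buffer);
    if(dest == NULL){
        return;                     // not a direct buffer, or access unsupported
    }
    std::memcpy(dest, frameData, size);

    // Tell the callback handler the frame is ready; no array is passed at all.
    jclass cls = env->GetObjectClass(callback);
    jmethodID mid = env->GetMethodID(cls, "onFrameReady", "()V");
    if(mid != NULL){
        env->CallVoidMethod(callback, mid);
    }
}

In read(), this would be called right after sws_scale, e.g. publishFrame(env, callback, buffer, context->rgbFrame->data[0], context->bufferSize), and the Java side then reads the pixels straight out of the ByteBuffer instead of receiving a byte[].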