On one hand, I have a Surface class which, when instantiated, automatically initializes a new thread and starts grabbing frames from a streaming source via native code based on FFMPEG. Here is the main part of the code for the Surface class mentioned above:
public class StreamingSurface extends Surface implements Runnable {

    ...

    public StreamingSurface(SurfaceTexture surfaceTexture, int width, int height) {
        super(surfaceTexture);
        screenWidth = width;
        screenHeight = height;
        init();
    }

    public void init() {
        mDrawTop = 0;
        mDrawLeft = 0;
        mVideoCurrentFrame = 0;
        this.setVideoFile();
        this.startPlay();
    }

    public void setVideoFile() {
        // Initialise FFMPEG
        naInit("");
        // Get stream video resolution
        int[] res = naGetVideoRes();
        mDisplayWidth = (int)(res[0]);
        mDisplayHeight = (int)(res[1]);
        // Prepare display
        mBitmap = Bitmap.createBitmap(mDisplayWidth, mDisplayHeight, Bitmap.Config.ARGB_8888);
        naPrepareDisplay(mBitmap, mDisplayWidth, mDisplayHeight);
    }

    public void startPlay() {
        thread = new Thread(this);
        thread.start();
    }

    @Override
    public void run() {
        while (true) {
            while (2 == mStatus) {
                // paused
                SystemClock.sleep(100);
            }
            mVideoCurrentFrame = naGetVideoFrame();
            if (0 < mVideoCurrentFrame) {
                // success: redraw
                if (isValid()) {
                    Canvas canvas = lockCanvas(null);
                    if (null != mBitmap) {
                        canvas.drawBitmap(mBitmap, mDrawLeft, mDrawTop, prFramePaint);
                    }
                    unlockCanvasAndPost(canvas);
                }
            } else {
                // failure, probably end of video: break
                naFinish(mBitmap);
                mStatus = 0;
                break;
            }
        }
    }
}
In my MainActivity class, I instantiate this class in the following way:
public void startCamera(int texture) {
    mSurface = new SurfaceTexture(texture);
    mSurface.setOnFrameAvailableListener(this);
    Surface surface = new StreamingSurface(mSurface, 640, 360);
    surface.release();
}
On the Android developer page I read the following line about the Surface class constructor:

"Images drawn to the Surface will be made available to the SurfaceTexture, which can attach them to an OpenGL ES texture via updateTexImage()."
That is exactly what I want to do, and I have everything ready for the subsequent rendering. But definitely, with the code above, I never get my frames captured in the Surface class transferred to the corresponding SurfaceTexture. I know this from the debugger: for instance, the onFrameAvailableListener method associated with that SurfaceTexture is never called.
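For reference, the pattern the documentation describes usually looks something like the sketch below. This is only an illustration of the intended flow, not a fix; the field names (mSurfaceTexture, mFrameAvailable) and the GL context setup are assumptions on my side, and `texture` is assumed to be a GL_TEXTURE_EXTERNAL_OES texture id created with glGenTextures():

// Sketch: wire the SurfaceTexture to a listener, then latch frames on the GL thread.
mSurfaceTexture = new SurfaceTexture(texture);
mSurfaceTexture.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
    @Override
    public void onFrameAvailable(SurfaceTexture st) {
        // May be called on an arbitrary thread: only set a flag here,
        // do not touch GL state.
        mFrameAvailable = true;
        // e.g. mGLSurfaceView.requestRender();
    }
});

// Later, on the thread that owns the GL context (e.g. inside onDrawFrame()):
if (mFrameAvailable) {
    mSurfaceTexture.updateTexImage(); // latches the newest frame into the GL texture
    mFrameAvailable = false;
}
// ... then draw a quad sampling the external texture.

Note that, as far as I understand, the Surface has to stay alive while frames are being produced into it; releasing it stops frame delivery to the SurfaceTexture.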
Any ideas? Maybe the fact that I am calling the drawing functions from a thread is messing everything up? In that case, what alternatives do I have to grab the frames?
Thanks in advance!