So, to explain what I have: my app is a real-time camera processing app. It grabs a bitmap of the current preview frame from a fragment and sends it to the main activity, which processes the bitmap and displays it in another fragment that contains only an ImageView.
So the loop goes roughly like this:
Bitmap from Camera Live Preview Frame -> Process (Renderscript) -> Display output on ImageView.
The processing step needs to be as fast as possible.
Here is my fragment:
public class CleanPreviewFragment extends Fragment implements TextureView.SurfaceTextureListener {

    // ...

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surfaceTexture) {
        Bitmap origBmp = mTextureView.getBitmap();
        mCallback.onCleanPreviewBitmapUpdated(origBmp);
    }
}
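For context, the OnBitmapUpdatedListener callback that connects the fragment to the activity looks roughly like this (I'm sketching it from how it's used, since I didn't paste the actual interface):

```java
import android.graphics.Bitmap;

// Sketch only: the real interface is nested inside CleanPreviewFragment.
public interface OnBitmapUpdatedListener {
    // Called on every preview frame with the raw bitmap.
    void onCleanPreviewBitmapUpdated(Bitmap bitmap);
}
```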
My main activity:
public class MainPreviewActivity extends Activity
        implements CleanPreviewFragment.OnBitmapUpdatedListener {

    // ...

    @Override
    public void onCleanPreviewBitmapUpdated(Bitmap origBmp) {
        if (processedPreviewFragment != null) {
            processedPreviewFragment.setImageViewBitmap(
                    bProcessor.processBitmap(origBmp)
            );
        }
    }
}
My processor class:
public class BProcessor {

    // ...

    public Bitmap processBitmap(Bitmap bmp) {
        Bitmap modifiedBitmap;
        switch (function) {
            case 0:
                modifiedBitmap = simplefilters.inversePixels(bmp);
                break;
            case 5:
                modifiedBitmap = edgeDetection.apply(bmp);
                break;
            default:
                modifiedBitmap = bmp;
                break;
        }
        return modifiedBitmap;
    }
}
And, as an example, my edge-detection class and its RenderScript. Note: the RenderScript doesn't actually do edge detection yet; it just averages the pixels.
public class EdgeDetection {
    private Allocation inAllocation;
    private Allocation outAllocation;
    private RenderScript mRS = null;
    private ScriptC_edgedetect mScript = null;

    public EdgeDetection(Context ctx) {
        mRS = RenderScript.create(ctx);
        mScript = new ScriptC_edgedetect(mRS, ctx.getResources(), R.raw.edgedetect);
    }

    public Bitmap apply(Bitmap origBmp) {
        int width = origBmp.getWidth();
        int height = origBmp.getHeight();
        //Bitmap bmpCopy = origBmp.copy(origBmp.getConfig(), true);
        inAllocation = Allocation.createFromBitmap(mRS, origBmp);
        outAllocation = Allocation.createFromBitmap(mRS, Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888));
        mScript.set_width(width - 1);
        mScript.set_height(height - 1);
        mScript.set_inPixels(inAllocation);
        mScript.forEach_root(inAllocation, outAllocation);
        outAllocation.copyTo(origBmp);
        return origBmp;
    }
}
And the RenderScript:
#pragma version(1)
#pragma rs java_package_name(com.apps.foo.bar)

rs_allocation inPixels;
int height;
int width;

uchar4 RS_KERNEL root(uchar4 in, uint32_t x, uint32_t y) {
    uchar4 out;
    float4 pixel = convert_float4(in).rgba;
    // Average the channels once so r/g/b all get the same gray value
    // (assigning channel by channel would feed already-modified values
    // into the later averages).
    float avg = (pixel.r + pixel.g + pixel.b) / 3;
    pixel.r = avg;
    pixel.g = avg;
    pixel.b = avg;
    out = convert_uchar4(pixel);
    return out;
}
Now, when I profile my app, onCleanPreviewBitmapUpdated takes a lot of resources (it gets called very often, etc.). And in my EdgeDetection class, these parts bother me:
public Bitmap apply(Bitmap origBmp) {
    int width = origBmp.getWidth();
    int height = origBmp.getHeight();
    //Bitmap bmpCopy = origBmp.copy(origBmp.getConfig(), true);
    inAllocation = Allocation.createFromBitmap(mRS, origBmp);
    outAllocation = Allocation.createFromBitmap(mRS, Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888));
    mScript.set_width(width - 1);
    mScript.set_height(height - 1);
    mScript.set_inPixels(inAllocation);
    mScript.forEach_root(inAllocation, outAllocation);
    outAllocation.copyTo(origBmp);
    return origBmp;
}
I used to have:
Bitmap bmpCopy = origBmp.copy(origBmp.getConfig(), true);
outAllocation = Allocation.createFromBitmap(mRS, bmpCopy);
and it worked. I also had:
//Bitmap bmpCopy = origBmp.copy(origBmp.getConfig(), true);
outAllocation = Allocation.createFromBitmap(mRS, origBmp);
and that worked too (if someone could explain why the second version works, that would be great).
But I wanted to avoid calling origBmp.copy(origBmp.getConfig(), true); since copies can be slow, right? So I'm now using this:
outAllocation = Allocation.createFromBitmap(mRS, Bitmap.createBitmap(width,height,Bitmap.Config.ARGB_8888));
I'm creating an empty Bitmap and passing it to the Allocation. Is that correct?
It works now, but is it actually valid? It seems to depend on Bitmap.Config.ARGB_8888: I had to change my RenderScript's return type to uchar4 instead of uchar3 to make it work.
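For what it's worth, one variation I've been experimenting with is caching the Allocations between frames instead of recreating them in every apply() call, only rebuilding them when the frame size changes. This is just a sketch of a reworked apply() for my EdgeDetection class, under the assumption that the preview size stays constant between frames:

```java
import android.graphics.Bitmap;
import android.renderscript.Allocation;

// Sketch of a reworked EdgeDetection.apply(): reuse the Allocations
// across frames instead of calling Allocation.createFromBitmap() per frame.
public Bitmap apply(Bitmap origBmp) {
    int width = origBmp.getWidth();
    int height = origBmp.getHeight();

    // (Re)create the allocations only on the first frame or when the size changes.
    if (inAllocation == null
            || inAllocation.getType().getX() != width
            || inAllocation.getType().getY() != height) {
        inAllocation = Allocation.createFromBitmap(mRS, origBmp);
        outAllocation = Allocation.createTyped(mRS, inAllocation.getType());
        mScript.set_width(width - 1);
        mScript.set_height(height - 1);
        mScript.set_inPixels(inAllocation);
    } else {
        // Same size as the last frame: just upload the new pixels.
        inAllocation.copyFrom(origBmp);
    }

    mScript.forEach_root(inAllocation, outAllocation);
    outAllocation.copyTo(origBmp);
    return origBmp;
}
```

I haven't measured whether this helps yet, so I'd welcome comments on it too.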
Thanks.