I am currently trying to understand one of the samples provided with the FastCV package. There is a function that performs memory allocation, fcvMemAlloc(), which takes the number of bytes and the byte alignment as inputs.

In the sample, called FastCVSample.cpp, memory has to be allocated for a data block of size w x h. However, when allocating the memory, they take w*h*3 and divide it by 2. I don't understand why. If anyone has a clue, I'd be glad to hear from them :-)

Here is the function -- see the fcvMemAlloc() call below:
JNIEXPORT void
JNICALL Java_com_qualcomm_fastcorner_FastCVSample_update
(
    JNIEnv* env,
    jobject obj,
    jbyteArray img,
    jint w,
    jint h
)
{
    jbyte*    jimgData = NULL;
    jboolean  isCopy = 0;
    uint32_t* curCornerPtr = 0;
    uint8_t*  renderBuffer;
    uint64_t  time;
    float     timeMs;

    // Get data from JNI
    jimgData = env->GetByteArrayElements( img, &isCopy );

    renderBuffer = getRenderBuffer( w, h );

    lockRenderBuffer();

    time = getTimeMicroSeconds();

    // jimgData might not be 128-bit aligned.
    // fcvColorYUV420toRGB565u8() and other fcv functionality inside
    // updateCorners() require 128-bit aligned memory. If jimgData
    // is not 128-bit aligned, allocate memory that is 128-bit
    // aligned and copy jimgData into the aligned buffer.
    uint8_t* pJimgData = (uint8_t*)jimgData;

    // Check if camera image data is not aligned.
    if( (uintptr_t)jimgData & 0xF )
    {
        // Allow for rescale if dimensions changed.
        if( w != (int)state.alignedImgWidth ||
            h != (int)state.alignedImgHeight )
        {
            if( state.alignedImgBuf != NULL )
            {
                DPRINTF( "%s %d Creating aligned for preview\n",
                         __FILE__, __LINE__ );
                fcvMemFree( state.alignedImgBuf );
                state.alignedImgBuf = NULL;
            }
        }

        // Allocate buffer for aligned data if necessary.
        if( state.alignedImgBuf == NULL )
        {
            state.alignedImgWidth = w;
            state.alignedImgHeight = h;
            state.alignedImgBuf = (uint8_t*)fcvMemAlloc( w*h*3/2, 16 ); // <----- Why this and not fcvMemAlloc( w*h*3, 16 )?
        }

        memcpy( state.alignedImgBuf, jimgData, w*h*3/2 ); // <---- same here
        pJimgData = state.alignedImgBuf;
    }

    // Copy the image into our own buffer first to avoid corruption during
    // rendering. Note that we can still get corruption in the image while
    // we do the copy, but we can't help that.

    // If the viewfinder is disabled, simply set the frame to gray.
    if( state.disableVF )
    {
        // Loop through RGB565 values and set to gray.
        uint32_t size = getRenderBufferSize();
        for( uint32_t i=0; i<size; i+=2 )
        {
            renderBuffer[i]   = 0x10;
            renderBuffer[i+1] = 0x84;
        }
    }
    else
    {
        fcvColorYUV420toRGB565u8(
            pJimgData,
            w,
            h,
            (uint32_t*)renderBuffer );
    }

    // Perform FastCV corner processing.
    updateCorners( (uint8_t*)pJimgData, w, h );

    timeMs = ( getTimeMicroSeconds() - time ) / 1000.f;
    state.timeFilteredMs =
        ((state.timeFilteredMs*(29.f/30.f)) + (float)(timeMs/30.f));

    // RGB color conversion
    if( !state.enableOverlayPixels )
    {
        state.numCorners = 0;
    }

    // Have the renderer draw corners on the render buffer.
    drawCorners( state.corners, state.numCorners );

    unlockRenderBuffer();

    // Let JNI know we don't need the data anymore. This is important!
    env->ReleaseByteArrayElements( img, jimgData, JNI_ABORT );
}
Answer (score: 0)
I found the answer on the following page:
How to render Android's YUV-NV21 camera image on the background in libgdx with OpenGLES 2.0 in real-time?
It explains that a YUV420 frame occupies (w x h x 3)/2 bytes, which is why that specific amount of memory is allocated.
Note: there is another example here: http://www.codeproject.com/Tips/691062/Resizing-NV-image-using-Nearest-Neighbor-Interpo