Does videoInput guarantee RGB webcam input? (getting images from videoInput/DirectShow into a Java BufferedImage)

Asked: 2012-09-19 20:29:05

Tags: c++ visual-c++ directshow color-space

I am using videoInput to grab a live stream from my webcam, and I have run into a problem: videoInput's documentation implies I should always get BGR/RGB, yet the "verbose" output tells me the pixel format is YUY2.

***** VIDEOINPUT LIBRARY - 0.1995 - TFW07 *****
SETUP: Setting up device 0
SETUP: 1.3M WebCam
SETUP: Couldn't find preview pin using SmartTee
SETUP: Default Format is set to 640 by 480 
SETUP: trying format RGB24 @ 640 by 480
SETUP: trying format RGB32 @ 640 by 480
SETUP: trying format RGB555 @ 640 by 480
SETUP: trying format RGB565 @ 640 by 480
SETUP: trying format YUY2 @ 640 by 480
SETUP: Capture callback set
SETUP: Device is setup and ready to capture.

My first thought was to try converting to RGB (assuming I really was getting YUY2 data), but I end up with a heavily distorted, blue-tinted image.

Here is my code for converting YUY2 to BGR (note: this is part of a much larger program, and the code is borrowed — I can dig up the URL if anyone asks):

#define CLAMP_MIN( in, min ) ((in) < (min))?(min):(in)
#define CLAMP_MAX( in, max ) ((in) > (max))?(max):(in)

#define FIXNUM 16
#define FIX(a, b) ((int)((a)*(1<<(b))))
#define UNFIX(a, b) ((a+(1<<(b-1)))>>(b))

#define ICCIRUV(x) (((x)<<8)/224)
#define ICCIRY(x) ((((x)-16)<<8)/219)
#define CLIP(t) CLAMP_MIN( CLAMP_MAX( (t), 255 ), 0 )
#define GET_R_FROM_YUV(y, u, v) UNFIX((FIX(1.0, FIXNUM)*(y) + FIX(1.402, FIXNUM)*(v)), FIXNUM)
#define GET_G_FROM_YUV(y, u, v) UNFIX((FIX(1.0, FIXNUM)*(y) + FIX(-0.344, FIXNUM)*(u) + FIX(-0.714, FIXNUM)*(v)), FIXNUM)
#define GET_B_FROM_YUV(y, u, v) UNFIX((FIX(1.0, FIXNUM)*(y) + FIX(1.772, FIXNUM)*(u)), FIXNUM)
bool yuy2_to_rgb24(int streamid) {
    int i;
    unsigned char y1, u, y2, v;
    int Y1, Y2, U, V;
    unsigned char r, g, b;

    // two pixels per 4-byte YUY2 macropixel, so w*h/2 iterations
    int size = stream[streamid]->config.g_h * (stream[streamid]->config.g_w / 2);
    unsigned long srcIndex = 0;
    unsigned long dstIndex = 0;

    try {

        for(i = 0 ; i < size ; i++) {

            // YUY2 macropixel layout: Y0 U Y1 V — two luma samples share one chroma pair
            y1 = stream[streamid]->vi_buffer[srcIndex];
            u = stream[streamid]->vi_buffer[srcIndex + 1];
            y2 = stream[streamid]->vi_buffer[srcIndex + 2];
            v = stream[streamid]->vi_buffer[srcIndex + 3];

            Y1 = ICCIRY(y1);
            U = ICCIRUV(u - 128);
            Y2 = ICCIRY(y2);
            V = ICCIRUV(v - 128);

            r = CLIP(GET_R_FROM_YUV(Y1, U, V));
            //r = (unsigned char)CLIP( (1.164f * (float(Y1) - 16.0f)) + (1.596f * (float(V) - 128)) );
            g = CLIP(GET_G_FROM_YUV(Y1, U, V));
            //g = (unsigned char)CLIP( (1.164f * (float(Y1) - 16.0f)) - (0.813f * (float(V) - 128.0f)) - (0.391f * (float(U) - 128.0f)) );
            b = CLIP(GET_B_FROM_YUV(Y1, U, V));
            //b = (unsigned char)CLIP( (1.164f * (float(Y1) - 16.0f)) + (2.018f * (float(U) - 128.0f)) );


            stream[streamid]->rgb_buffer[dstIndex] = b;
            stream[streamid]->rgb_buffer[dstIndex + 1] = g;
            stream[streamid]->rgb_buffer[dstIndex + 2] = r;

            dstIndex += 3;

            r = CLIP(GET_R_FROM_YUV(Y2, U, V));
            //r = (unsigned char)CLIP( (1.164f * (float(Y2) - 16.0f)) + (1.596f * (float(V) - 128)) );
            g = CLIP(GET_G_FROM_YUV(Y2, U, V));
            //g = (unsigned char)CLIP( (1.164f * (float(Y2) - 16.0f)) - (0.813f * (float(V) - 128.0f)) - (0.391f * (float(U) - 128.0f)) );
            b = CLIP(GET_B_FROM_YUV(Y2, U, V));
            //b = (unsigned char)CLIP( (1.164f * (float(Y2) - 16.0f)) + (2.018f * (float(U) - 128.0f)) );

            stream[streamid]->rgb_buffer[dstIndex] = b;
            stream[streamid]->rgb_buffer[dstIndex + 1] = g;
            stream[streamid]->rgb_buffer[dstIndex + 2] = r;

            dstIndex += 3;

            srcIndex += 4;
        }

        return true;
    } catch(...) {
        // Note: a plain catch(...) does not catch access violations
        // unless the code is built with MSVC's /EHa option.
        return false;
    }
}

When that didn't work, I assumed that either a) my color-space conversion function is wrong, or b) videoInput is lying to me.

Well, I wanted to double-check that videoInput really was telling the truth, and it turns out there is no way at all for me to see the pixel format of what I get from the videoInput::getPixels() function beyond the verbose text (unless I'm completely crazy and just can't see it). This leads me to assume that videoInput may be doing some sort of color-space conversion behind the scenes, so that you always get a consistent image regardless of the webcam. With that in mind, and following some of the documentation at videoInput.h:96, it seems it simply emits RGB or BGR images.

The utility I use to display the image takes RGB images (a Java BufferedImage), so I figured I could feed it the raw data straight from videoInput and it should be fine.

Here is how I set up the image in Java:

BufferedImage buffer = new BufferedImage(
    directShow.device_stream_width(stream),
    directShow.device_stream_height(stream),
    BufferedImage.TYPE_INT_RGB );

int rgbdata[] = directShow.grab_frame_stream(stream);
if( rgbdata.length > 0 ) {
  buffer.setRGB(
    0, 0,
    directShow.device_stream_width(stream),
    directShow.device_stream_height(stream),
    rgbdata,
    0, directShow.device_stream_width(stream)
  );
}

And here is how I send it to Java (C++/JNI):

JNIEXPORT jintArray JNICALL Java_directshowcamera_dsInterface_grab_1frame_1stream(JNIEnv *env, jobject obj, jint streamid)
{
    //jclass bbclass = env->FindClass( "java/nio/IntBuffer" );
    //jmethodID putMethod = env->GetMethodID(bbclass, "put", "(B)Ljava/nio/IntBuffer;");
    int buffer_size;
    jintArray ia;
    jint *intbuffer = NULL;
    unsigned char *buffer = NULL;

    append_stream( streamid );

    buffer_size = stream_device_rgb24_size(streamid);
    ia = env->NewIntArray( buffer_size );

    buffer = stream_device_buffer_rgb( streamid );
    if( buffer == NULL ) {
        env->DeleteLocalRef( ia );
        return env->NewIntArray( 0 );
    }

    // allocate after the NULL check so the buffer is not leaked on early return
    intbuffer = (jint *)calloc( buffer_size, sizeof(jint) );

    for(int i=0; i < buffer_size; i++ ) {
        intbuffer[i] = (jint)buffer[i];
    }
    env->SetIntArrayRegion( ia, 0, buffer_size, intbuffer );

    free( intbuffer );

    return ia;
}

This has been driving me absolutely insane for the past two weeks, and I have tried every suggested variation with absolutely no success.

0 Answers:

No answers yet.