Encoding raw NV12 frames with ffmpeg

Date: 2012-07-31 07:41:06

Tags: ffmpeg

I am trying to encode raw frames in NV12 format. The frame rate is 15, and I am encoding with avcodec. My capture device has a callback that fires when a raw viewfinder frame is available. I copy the raw viewfinder frame, build an AVFrame from the data, and then feed the frame to avcodec_encode_video as in the API example, but somehow I am not getting the expected result. I am using POSIX threads: the raw frames are kept in a buffer, and my encoder thread pulls data from that buffer and encodes it. Encoding is far too slow (tested with both h264 and mpeg1). Is the problem in my threading, or something else? I'm at a loss. The output is also puzzling: the entire encoding path is a single function running on a single thread, yet I see a batch of frames encoded at once. How exactly does the encoder work? Below is the encoding code snippet:

while(cameraRunning)
{
    pthread_mutex_lock(&lock_encoder);
    if(vr->buffer->getTotalData()>0)
    {
        i++;
        fprintf(stderr,"Encoding %d\n",i);
        AVFrame *picture;
        int y = 0,x;
        picture = avcodec_alloc_frame();
        av_image_alloc(picture->data, picture->linesize,c->width, c->height,c->pix_fmt, 1);
        uint8_t* buf_source = new uint8_t[vr->width*vr->height*3/2];
        uint8_t* data = vr->buffer->Read(vr->width*vr->height*3/2);
        memcpy(buf_source,data,vr->width*vr->height*3/2);
        //free(&vr->buffer->Buffer[vr->buffer->getRead()-1][0]);
        /*for (y = 0; y < vr->height*vr->width; y++)
        {
            picture->data[0][(y/vr->width) * picture->linesize[0] + (y%vr->width)] = buf_source[(y/vr->width)+(y%vr->width)];
            if(y<vr->height*vr->width/4)
            {
                picture->data[1][(y/vr->width) * picture->linesize[1] + (y%vr->width)] = buf_source[vr->width*vr->height + 2 * ((y/vr->width)+(y%vr->width))];
                picture->data[2][(y/vr->width) * picture->linesize[2] + (y%vr->width)] = buf_source[vr->width*vr->height + 2 * ((y/vr->width)+(y%vr->width)) + 1];
            }
        }*/

        for (y = 0; y < c->height; y++) {
            for (x = 0; x < c->width; x++) {
                picture->data[0][y * picture->linesize[0] + x] = x + y + i * 7;
            }
        }

        /* Cb and Cr */
        for (y = 0; y < c->height/2; y++) {
            for (x = 0; x < c->width/2; x++) {
                picture->data[1][y * picture->linesize[1] + x] = 128 + y + i * 2;
                picture->data[2][y * picture->linesize[2] + x] = 64 + x + i * 5;
            }
        }
        free(buf_source);
        fprintf(stderr,"Data ready\n");

        outbuf_size = 100000 + c->width*c->height*3/2;
        outbuf = (uint8_t*)malloc(outbuf_size);
        fprintf(stderr,"Preparation done!!!\n");
        out_size = avcodec_encode_video(c, outbuf, outbuf_size, picture);
        had_output |= out_size;
        printf("encoding frame %3d (size=%5d)\n", i, out_size);
        fwrite(outbuf, 1, out_size, f);
        av_free(picture->data[0]);
        av_free(picture);

    }
    pthread_mutex_unlock(&lock_encoder);
}

1 Answer:

Answer 0 (score: 0)

You can use sws_scale from libswscale to do the colorspace conversion. First create an SwsContext with sws_getContext, specifying the source format (NV12) and the destination format (YUV420P).

m_pSwsCtx = sws_getContext(picture_width, 
               picture_height, 
               PIX_FMT_NV12, 
               picture_width, 
               picture_height,
               PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);

Then, for each frame you want to convert:

sws_scale(m_pSwsCtx, frameData, frameLineSize, 0, frameHeight,
    outFrameData, outFrameLineSize);
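To make the conversion concrete, here is what it amounts to by hand. This is a minimal sketch (not from the original answer): it splits a tightly packed NV12 buffer — a full-size Y plane followed by an interleaved UV plane — into the three planes that YUV420P expects. The function name is mine, and it assumes stride == width, which real capture buffers may not satisfy:

```c
#include <stdint.h>
#include <string.h>

/* Split an NV12 frame (Y plane + interleaved UV plane) into the three
 * planes of YUV420P. Assumes the source is tightly packed, i.e. each
 * row's stride equals the frame width. */
static void nv12_to_yuv420p(const uint8_t *src, uint8_t *dst_y,
                            uint8_t *dst_u, uint8_t *dst_v,
                            int width, int height)
{
    const uint8_t *src_uv = src + width * height;  /* UV plane follows Y */
    int i;

    memcpy(dst_y, src, width * height);            /* Y plane is identical */
    for (i = 0; i < width * height / 4; i++) {     /* deinterleave UV pairs */
        dst_u[i] = src_uv[2 * i];
        dst_v[i] = src_uv[2 * i + 1];
    }
}
```

In practice sws_scale is preferable because it honors the per-plane linesize (stride) of both source and destination, but the manual version shows that NV12 and YUV420P carry exactly the same samples, only arranged differently — which is why the conversion is cheap.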