What should be passed to avcodec_decode_video2 for an H.264 transport stream?

Date: 2016-11-24 16:48:00

Tags: c++ ffmpeg video-processing

I want to decode H.264 video from a collection of MPEG-2 transport stream packets, but I am not clear on what to pass to avcodec_decode_video2().

The documentation says to pass "the input AVPacket containing the input buffer."

But what should be in the input buffer?

A PES packet will be spread across the payload portions of several TS packets, with NALUs inside the PES. So which is it: one TS fragment? A whole PES? Just the PES payload?
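For reference, the layering involved: each 188-byte TS packet carries at most 184 bytes of payload, PES packets are reassembled from successive payloads of one PID, and the H.264 NALUs sit inside the PES payload. A minimal sketch of locating the payload inside a single TS packet (field layout per the MPEG-2 Systems spec, ISO/IEC 13818-1; this is not FFmpeg API):

```cpp
#include <cassert>
#include <cstdint>

// Locate the payload inside one 188-byte TS packet.
// Returns the payload offset within the packet, or -1 if there is none.
// Also reports the 13-bit PID and the payload_unit_start_indicator
// (set when a new PES packet begins in this TS packet).
int ts_payload_offset(const uint8_t pkt[188], int *pid, bool *pusi)
{
    if (pkt[0] != 0x47)                    // sync byte
        return -1;
    *pusi = (pkt[1] & 0x40) != 0;          // payload_unit_start_indicator
    *pid  = ((pkt[1] & 0x1F) << 8) | pkt[2];
    int afc = (pkt[3] >> 4) & 0x3;         // adaptation_field_control
    if (afc == 0 || afc == 2)              // reserved / adaptation-only
        return -1;
    int off = 4;
    if (afc == 3)                          // adaptation field precedes payload
        off += 1 + pkt[4];                 // length byte + adaptation body
    return off < 188 ? off : -1;
}
```

Reassembling the PES (and the NALUs inside it) from these payloads is exactly the demuxing work that libavformat normally does for you.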

The sample code mentions:

  But some other codecs (msmpeg4, mpeg4) are inherently frame based, so
  you must call them with all the data for one frame exactly. You must
  also initialize 'width' and 'height' before initializing them.

But I cannot find an explanation of what "all the data" means...

Passing a fragment of a TS packet's payload does not work:

AVPacket avDecPkt;
av_init_packet(&avDecPkt);
avDecPkt.data = inbuf_ptr;
avDecPkt.size = esBufSize;

len = avcodec_decode_video2(mpDecoderContext, mpFrameDec, &got_picture, &avDecPkt);
if (len < 0)
{
    printf("  TS PKT #%.0f. Error decoding frame #%04d [rc=%d '%s']\n",
        tsPacket.pktNum, mDecodedFrameNum, len, av_make_error_string(errMsg, 128, len));
    return;
}

Output

[h264 @ 0x81cd2a0] no frame!
TS PKT #2973. Error decoding frame #0001 [rc=-1094995529 'Invalid data found when processing input']

EDIT

Using the excellent pointers from WLGfx, I made this simple program to try decoding TS packets. As input, I prepared a file containing only the TS packets from the video PID.

It feels close, but I don't know how to set up the FormatContext. The code below segfaults in av_read_frame() (inside ret = s->iformat->read_packet(s, pkt)) because s->iformat is zero.

Suggestions?

EDIT II - Sorry, forgot to post the source code.

EDIT III - Updated the sample code to simulate reading from a queue of TS packets.

/*
 * Test program for video decoder
 */

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

extern "C" {

#ifdef __cplusplus
    #define __STDC_CONSTANT_MACROS
    #ifdef _STDINT_H
        #undef _STDINT_H
    #endif
    #include <stdint.h>
#endif
}

extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libswscale/swscale.h"
#include "libavutil/imgutils.h"
#include "libavutil/opt.h"
}


class VideoDecoder
{
public:
    VideoDecoder();
    bool rcvTsPacket(AVPacket &inTsPacket);

private:
    AVCodec         *mpDecoder;
    AVCodecContext  *mpDecoderContext;
    AVFrame         *mpDecodedFrame;
    AVFormatContext *mpFmtContext;

};

VideoDecoder::VideoDecoder()
{
    av_register_all();

    // FORMAT CONTEXT SETUP
    mpFmtContext = avformat_alloc_context();
    mpFmtContext->flags = AVFMT_NOFILE;
    // ????? WHAT ELSE ???? //

    // DECODER SETUP
    mpDecoder = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!mpDecoder)
    {
        printf("Could not load decoder\n");
        exit(11);
    }

    mpDecoderContext = avcodec_alloc_context3(mpDecoder);
    if (avcodec_open2(mpDecoderContext, mpDecoder, NULL) < 0)
    {
        printf("Cannot open decoder context\n");
        exit(1);
    }

    mpDecodedFrame = av_frame_alloc();
}

bool
VideoDecoder::rcvTsPacket(AVPacket &inTsPkt)
{
    bool ret = true;

    if ((av_read_frame(mpFmtContext, &inTsPkt)) < 0)
    {
        printf("Error in av_read_frame()\n");
        ret = false;
    }
    else
    {
        // success.  Decode the TS packet
        int got;
        int len = avcodec_decode_video2(mpDecoderContext, mpDecodedFrame, &got, &inTsPkt);
        if (len < 0)
            ret = false;

        if (got)
            printf("GOT A DECODED FRAME\n");
    }

    return ret;
}

int
main(int argc, char **argv)
{
    if (argc != 2)
    {
        printf("Usage: %s tsInFile\n", argv[0]);
        exit(1);
    }

    FILE *tsInFile = fopen(argv[1], "r");
    if (!tsInFile)
    {
        perror("Could not open TS input file");
        exit(2);
    }

    unsigned int tsPktNum = 0;
    uint8_t      tsBuffer[256];
    AVPacket     tsPkt;
    av_init_packet(&tsPkt);

    VideoDecoder vDecoder;

    while (!feof(tsInFile))
    {
        tsPktNum++;

        tsPkt.size = 188;
        tsPkt.data = tsBuffer;
        fread(tsPkt.data, 188, 1, tsInFile);

        vDecoder.rcvTsPacket(tsPkt);
    }
}

2 Answers:

Answer 0 (score: 1)

I have some code fragments that should help you, since I work with MPEG-TS as well.

Starting with my packet-reader thread, which checks each packet against the stream IDs I have already found and for which I have codec contexts:

void *FFMPEG::thread_packet_function(void *arg) {
    FFMPEG *ffmpeg = (FFMPEG*)arg;
    for (int c = 0; c < MAX_PACKETS; c++)
        ffmpeg->free_packets[c] = &ffmpeg->packet_list[c];
    ffmpeg->packet_pos = MAX_PACKETS;

    Audio.start_decoding();
    Video.start_decoding();
    Subtitle.start_decoding();

    while (!ffmpeg->thread_quit) {
        if (ffmpeg->packet_pos != 0 &&
                Audio.okay_add_packet() &&
                Video.okay_add_packet() &&
                Subtitle.okay_add_packet()) {

            pthread_mutex_lock(&ffmpeg->packet_mutex); // get free packet
            AVPacket *pkt = ffmpeg->free_packets[--ffmpeg->packet_pos]; // pre decrement
            pthread_mutex_unlock(&ffmpeg->packet_mutex);

            if ((av_read_frame(ffmpeg->fContext, pkt)) >= 0) { // success
                int id = pkt->stream_index;
                if (id == ffmpeg->aud_stream.stream_id) Audio.add_packet(pkt);
                else if (id == ffmpeg->vid_stream.stream_id) Video.add_packet(pkt);
                else if (id == ffmpeg->sub_stream.stream_id) Subtitle.add_packet(pkt);
                else { // unknown packet
                    av_packet_unref(pkt);

                    pthread_mutex_lock(&ffmpeg->packet_mutex); // put packet back
                    ffmpeg->free_packets[ffmpeg->packet_pos++] = pkt;
                    pthread_mutex_unlock(&ffmpeg->packet_mutex);

                    //LOGI("Dumping unknown packet, id %d", id);
                }
            } else {
                av_packet_unref(pkt);

                pthread_mutex_lock(&ffmpeg->packet_mutex); // put packet back
                ffmpeg->free_packets[ffmpeg->packet_pos++] = pkt;
                pthread_mutex_unlock(&ffmpeg->packet_mutex);

                //LOGI("No packet read");
            }
        } else { // buffers full so yield
            //LOGI("Packet reader on hold: Audio-%d, Video-%d, Subtitle-%d",
            //  Audio.packet_pos, Video.packet_pos, Subtitle.packet_pos);
            usleep(1000);
            //sched_yield();
        }
    }
    return 0;
}

Each of the audio, video and subtitle decoders has its own thread, which receives packets from the thread above via ring buffers. I had to split the decoders into their own threads because CPU usage kept climbing once I started using the de-interlacing filter.

My video decoder reads packets from the buffer, and when it has finished with a packet it sends it back so it can be used again. Everything runs fine, and balancing the packet buffers does not take much time.

Here is a snippet from my video decoder:

void *VideoManager::decoder(void *arg) {
    LOGI("Video decoder started");
    VideoManager *mgr = (VideoManager *)arg;
    while (!ffmpeg.thread_quit) {
        pthread_mutex_lock(&mgr->packet_mutex);
        if (mgr->packet_pos != 0) {
            // fetch first packet to decode
            AVPacket *pkt = mgr->packets[0];

            // shift list down one
            for (int c = 1; c < mgr->packet_pos; c++) {
                mgr->packets[c-1] = mgr->packets[c];
            }
            mgr->packet_pos--;
            pthread_mutex_unlock(&mgr->packet_mutex); // finished with packets array

            int got;
            AVFrame *frame = ffmpeg.vid_stream.frame;
            avcodec_decode_video2(ffmpeg.vid_stream.context, frame, &got, pkt);
            ffmpeg.finished_with_packet(pkt);
            if (got) {
#ifdef INTERLACE_ALL
                if (!frame->interlaced_frame) mgr->add_av_frame(frame, 0);
                else {
                    if (!mgr->filter_initialised) mgr->init_filter_graph(frame);
                    av_buffersrc_add_frame_flags(mgr->filter_src_ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF);
                    int c = 0;
                    while (true) {
                        AVFrame *filter_frame = ffmpeg.vid_stream.filter_frame;
                        int result = av_buffersink_get_frame(mgr->filter_sink_ctx, filter_frame);
                        if (result == AVERROR(EAGAIN) ||
                                result == AVERROR_EOF ||
                                result < 0)
                            break;
                        mgr->add_av_frame(filter_frame, c++);
                        av_frame_unref(filter_frame);
                    }
                    //LOGI("Interlaced %d frames, decode %d, playback %d", c, mgr->decode_pos, mgr->playback_pos);
                }
#elif defined(INTERLACE_HALF)
                if (!frame->interlaced_frame) mgr->add_av_frame(frame, 0);
                else {
                    if (!mgr->filter_initialised) mgr->init_filter_graph(frame);
                    av_buffersrc_add_frame_flags(mgr->filter_src_ctx, frame, AV_BUFFERSRC_FLAG_KEEP_REF);
                    int c = 0;
                    while (true) {
                        AVFrame *filter_frame = ffmpeg.vid_stream.filter_frame;
                        int result = av_buffersink_get_frame(mgr->filter_sink_ctx, filter_frame);
                        if (result == AVERROR(EAGAIN) ||
                                result == AVERROR_EOF ||
                                result < 0)
                            break;
                        mgr->add_av_frame(filter_frame, c++);
                        av_frame_unref(filter_frame);
                    }
                    //LOGI("Interlaced %d frames, decode %d, playback %d", c, mgr->decode_pos, mgr->playback_pos);
                }
#else
                mgr->add_av_frame(frame, 0);
#endif
            }
            //LOGI("decoded video packet");
        } else {
            pthread_mutex_unlock(&mgr->packet_mutex);
        }
    }
    LOGI("Video decoder ended");
}

As you can see, I use mutexes when passing packets back and forth.

Once a frame has been decoded, I simply copy the YUV buffers from the frame into another buffer list for later use. I do not convert the YUV; I use a shader to convert YUV to RGB on the GPU.

The next snippet adds a decoded frame to my buffer list. It may help you understand how to deal with the data.

void VideoManager::add_av_frame(AVFrame *frame, int field_num) {
    int y_linesize = frame->linesize[0];
    int u_linesize = frame->linesize[1];

    int hgt = frame->height;

    int y_buffsize = y_linesize * hgt;
    int u_buffsize = u_linesize * hgt / 2;

    int buffsize = y_buffsize + u_buffsize + u_buffsize;

    VideoBuffer *buffer = &buffers[decode_pos];

    if (ffmpeg.is_network && playback_pos == decode_pos) { // patched 25/10/16 wlgfx
        buffer->used = false;
        if (!buffer->data) buffer->data = (char*)mem.alloc(buffsize);
        if (!buffer->data) {
            LOGI("Dropped frame, allocation error");
            return;
        }
    } else if (playback_pos == decode_pos) {
        LOGI("Dropped frame, ran out of decoder frame buffers");
        return;
    } else if (!buffer->data) {
        buffer->data = (char*)mem.alloc(buffsize);
        if (!buffer->data) {
            LOGI("Dropped frame, allocation error.");
            return;
        }
    }

    buffer->y_frame = buffer->data;
    buffer->u_frame = buffer->y_frame + y_buffsize;
    buffer->v_frame = buffer->y_frame + y_buffsize + u_buffsize;

    buffer->wid = frame->width;
    buffer->hgt = hgt;

    buffer->y_linesize = y_linesize;
    buffer->u_linesize = u_linesize;

    int64_t pts = av_frame_get_best_effort_timestamp(frame);
    buffer->pts = pts;
    buffer->buffer_size = buffsize;

    double field_add = av_q2d(ffmpeg.vid_stream.context->time_base) * field_num;
    buffer->frame_time = av_q2d(ts_stream) * pts + field_add;

    memcpy(buffer->y_frame, frame->data[0], (size_t) (buffer->y_linesize * buffer->hgt));
    memcpy(buffer->u_frame, frame->data[1], (size_t) (buffer->u_linesize * buffer->hgt / 2));
    memcpy(buffer->v_frame, frame->data[2], (size_t) (buffer->u_linesize * buffer->hgt / 2));

    buffer->used = true;
    decode_pos = (decode_pos + 1) % MAX_VID_BUFFERS;

    //if (field_num == 0) LOGI("Video %.2f, %d - %d",
    //        buffer->frame_time - Audio.pts_start_time, decode_pos, playback_pos);
}

If there is anything else I can help with, just give me a shout. :-)

EDIT:

A snippet showing how I open the video stream context; it determines the codec automatically, whether h264, mpeg2 or anything else:

void FFMPEG::open_video_stream() {
    vid_stream.stream_id = av_find_best_stream(fContext, AVMEDIA_TYPE_VIDEO,
                                                -1, -1, &vid_stream.codec, 0);
    if (vid_stream.stream_id == -1) return;

    vid_stream.context = fContext->streams[vid_stream.stream_id]->codec;

    if (!vid_stream.codec || avcodec_open2(vid_stream.context,
            vid_stream.codec, NULL) < 0) {
        vid_stream.stream_id = -1;
        return;
    }

    vid_stream.frame = av_frame_alloc();
    vid_stream.filter_frame = av_frame_alloc();
}

EDIT2:

This is how I open the input stream, whether a file or a URL. The AVFormatContext is the main context for the stream.

bool FFMPEG::start_stream(char *url_, float xtrim, float ytrim, int gain) {
    aud_stream.stream_id = -1;
    vid_stream.stream_id = -1;
    sub_stream.stream_id = -1;

    this->url = url_;
    this->xtrim = xtrim;
    this->ytrim = ytrim;
    Audio.volume = gain;

    Audio.init();
    Video.init();

    fContext = avformat_alloc_context();

    if ((avformat_open_input(&fContext, url_, NULL, NULL)) != 0) {
        stop_stream();
        return false;
    }

    if ((avformat_find_stream_info(fContext, NULL)) < 0) {
        stop_stream();
        return false;
    }

    // network stream will overwrite packets if buffer is full

    is_network =  url.substr(0, 4) == "udp:" ||
                  url.substr(0, 4) == "rtp:" ||
                  url.substr(0, 5) == "rtsp:" ||
                  url.substr(0, 5) == "http:";  // added for wifi broadcasting ability

    // determine if stream is audio only

    is_mp3 = url.substr(url.size() - 4) == ".mp3";

    LOGI("Stream: %s", url_);

    if (!open_audio_stream()) {
        stop_stream();
        return false;
    }

    if (is_mp3) {
        vid_stream.stream_id = -1;
        sub_stream.stream_id = -1;
    } else {
        open_video_stream();
        open_subtitle_stream();

        if (vid_stream.stream_id == -1) { // switch to audio only
            close_subtitle_stream();
            is_mp3 = true;
        }
    }

    LOGI("Audio: %d, Video: %d, Subtitle: %d",
            aud_stream.stream_id,
            vid_stream.stream_id,
            sub_stream.stream_id);

    if (aud_stream.stream_id != -1) {
        LOGD("Audio stream time_base {%d, %d}",
            aud_stream.context->time_base.num,
            aud_stream.context->time_base.den);
    }

    if (vid_stream.stream_id != -1) {
        LOGD("Video stream time_base {%d, %d}",
            vid_stream.context->time_base.num,
            vid_stream.context->time_base.den);
    }

    LOGI("Starting packet and decode threads");

    thread_quit = false;

    pthread_create(&thread_packet, NULL, &FFMPEG::thread_packet_function, this);

    Display.set_overlay_timout(3.0);

    return true;
}

EDIT: (building an AVPacket)

Build an AVPacket to send to the decoder...

AVPacket packet;
av_init_packet(&packet);
packet.data = myTSpacketdata; // pointer to the TS packet
packet.size = 188;

You should be able to reuse that packet. It may need unreffing.

Answer 1 (score: 0)

You must first use libavformat to pull the compressed frames out of the file; then you can decode them with avcodec_decode_video2. Take a look at this tutorial: http://dranger.com/ffmpeg/