Qt - H.264 video stream using the FFmpeg library

Date: 2016-07-28 12:50:59

Tags: qt video ffmpeg video-streaming h.264

I want to display the stream from my IP camera in my Qt Widgets application. First, I connect to the UDP port of the IP camera, which transmits H.264-encoded video. After the socket is bound, on every readyRead() signal I append the received datagram to a buffer in order to assemble a full frame.

Variable initialization:

AVCodec *codec;
AVCodecContext *codecCtx;
AVFrame *frame;
AVPacket packet;
this->buffer.clear();
this->socket = new QUdpSocket(this);

QObject::connect(this->socket, &QUdpSocket::connected, this, &H264VideoStreamer::connected);
QObject::connect(this->socket, &QUdpSocket::disconnected, this, &H264VideoStreamer::disconnected);
QObject::connect(this->socket, &QUdpSocket::readyRead, this, &H264VideoStreamer::readyRead);
QObject::connect(this->socket, &QUdpSocket::hostFound, this, &H264VideoStreamer::hostFound);
QObject::connect(this->socket, SIGNAL(error(QAbstractSocket::SocketError)), this, SLOT(error(QAbstractSocket::SocketError)));
QObject::connect(this->socket, &QUdpSocket::stateChanged, this, &H264VideoStreamer::stateChanged);

avcodec_register_all();

codec = avcodec_find_decoder(AV_CODEC_ID_H264);
if (!codec){
   qDebug() << "Codec not found";
   return;
}

codecCtx = avcodec_alloc_context3(codec);
if (!codecCtx){
    qDebug() << "Could not allocate video codec context";
    return;
}

if (codec->capabilities & CODEC_CAP_TRUNCATED)
      codecCtx->flags |= CODEC_FLAG_TRUNCATED;

codecCtx->flags2 |= CODEC_FLAG2_CHUNKS;

AVDictionary *dictionary = nullptr;

if (avcodec_open2(codecCtx, codec, &dictionary) < 0) {
    qDebug() << "Could not open codec";
    return;
}
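
Note that later FFmpeg releases deprecate avcodec_register_all() and the CODEC_CAP_TRUNCATED/CODEC_FLAG_TRUNCATED pair. A minimal sketch of the equivalent setup against the newer API (assuming FFmpeg 4.0+ headers; not part of the original snippet):

    extern "C" {
    #include <libavcodec/avcodec.h>
    }

    // Sketch: decoder setup on FFmpeg 4.0+, where no registration call is needed.
    const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!codec) {
        qDebug() << "Codec not found";
        return;
    }

    AVCodecContext *codecCtx = avcodec_alloc_context3(codec);
    if (!codecCtx) {
        qDebug() << "Could not allocate video codec context";
        return;
    }

    // AV_CODEC_FLAG2_CHUNKS replaces CODEC_FLAG2_CHUNKS; the TRUNCATED
    // capability/flag pair is deprecated, so incomplete NAL units must be
    // reassembled by the caller, as the RTP code below already does.
    codecCtx->flags2 |= AV_CODEC_FLAG2_CHUNKS;

    if (avcodec_open2(codecCtx, codec, nullptr) < 0) {
        qDebug() << "Could not open codec";
        return;
    }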

The algorithm is as follows:

void H264VideoStreamer::readyRead() {
    QByteArray datagram;
    datagram.resize(this->socket->pendingDatagramSize());
    QHostAddress sender;
    quint16 senderPort;

    this->socket->readDatagram(datagram.data(), datagram.size(), &sender, &senderPort);

    // The first 12 bytes of the datagram are the RTP header.
    QByteArray rtpHeader = datagram.left(12);
    datagram.remove(0, 12);

    // The low 5 bits of the first payload byte hold the NAL unit type;
    // the high bit of the second payload byte marks the start of a frame.
    int nal_unit_type = datagram[0] & 0x1F;
    bool start = (datagram[1] & 0x80) != 0;

    int seqNo = rtpHeader[3] & 0xFF;

    qDebug() << "H264 video decoder::readyRead()"
             << "from: " << sender.toString() << ":" << QString::number(senderPort)
             << "\n\tDatagram size: " << QString::number(datagram.size())
             << "\n\tH264 RTP header (hex): " << rtpHeader.toHex()
             << "\n\tH264 VIDEO data (hex): " << datagram.toHex();

    qDebug() << "nal_unit_type = " << nal_unit_type << " - " << getNalUnitTypeStr(nal_unit_type);
    if (start)
        qDebug() << "START";

    if (nal_unit_type == 7){
        this->sps = datagram;
        qDebug() << "Sequence parameter set found = " << this->sps.toHex();
        return;
    } else if (nal_unit_type == 8){
        this->pps = datagram;
        qDebug() << "Picture parameter set found = " << this->pps.toHex();
        return;
    }

    // VIDEO FRAME
    if (start){
        if (!this->buffer.isEmpty())
            decode();

        this->buffer.clear();
        qDebug() << "Initializing new buffer...";

        // Annex B layout: start code, SPS, start code, PPS, start code.
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x01));

        this->buffer.append(this->sps);

        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x01));

        this->buffer.append(this->pps);

        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x00));
        this->buffer.append(char(0x01));
    }

    qDebug() << "Appending buffer data...";
    this->buffer.append(datagram);
}
  • The first 12 bytes of the datagram are the RTP header
  • Everything else is video data
  • The last 5 bits of the first VIDEO DATA byte indicate the NAL unit type. I always get one of the following 4 values (1 - coded non-IDR slice, 5 - coded IDR slice, 7 - SPS, 8 - PPS)
  • The start bit (datagram[1] & 0x80) in the second VIDEO DATA byte indicates whether this datagram is the START of a frame
  • All video data from a START onwards is stored in the buffer
  • Once a new frame arrives (START is set), the buffer is decoded and a new buffer is started
  • The frame passed to the decoder is assembled as follows:

    00 00 00 01

    SPS

    00 00 00 01

    PPS

    00 00 00 01

    Concatenated VIDEO DATA

  • Decoding is performed with the avcodec_decode_video2() function from the FFmpeg library:

    void H264VideoStreamer::decode() {
        av_init_packet(&packet);
        av_new_packet(&packet, this->buffer.size());

        // Copy the assembled Annex B buffer into the packet;
        // QByteArray::data() returns a pointer to the raw bytes.
        memcpy(packet.data, this->buffer.data(), this->buffer.size());
        packet.size = this->buffer.size();

        frame = av_frame_alloc();
        if (!frame){
            qDebug() << "Could not allocate video frame";
            return;
        }

        int got_frame = 1;

        int len = avcodec_decode_video2(codecCtx, frame, &got_frame, &packet);

        if (len < 0){
            qDebug() << "Error while decoding frame.";
            return;
        }

        //if (got_frame > 0){ // got_frame is always 0
        //    qDebug() << "Data decoded: " << frame->data[0];
        //}

        char *frameData = (char *) frame->data[0];
        QByteArray decodedFrame;
        decodedFrame.setRawData(frameData, len);

        qDebug() << "Data decoded: " << decodedFrame;

        av_frame_unref(frame);
        av_free_packet(&packet);

        emit imageReceived(decodedFrame);
    }
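
FFmpeg 3.1 and later deprecate avcodec_decode_video2() in favour of the send/receive pair, which makes the "full frame received" condition explicit. A sketch of the same decode step with that API, assuming the codecCtx and buffer members from above:

    // Sketch: decode one assembled Annex B buffer with the send/receive API.
    AVPacket *pkt = av_packet_alloc();
    av_new_packet(pkt, this->buffer.size());
    memcpy(pkt->data, this->buffer.constData(), this->buffer.size());

    AVFrame *frm = av_frame_alloc();

    if (avcodec_send_packet(codecCtx, pkt) == 0) {
        // One packet may yield zero or more frames; drain them all.
        while (avcodec_receive_frame(codecCtx, frm) == 0) {
            qDebug() << "Decoded frame: " << frm->width << "x" << frm->height;
            // frm now holds planar YUV data; see the conversion sketch below.
        }
    }

    av_frame_free(&frm);
    av_packet_free(&pkt);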
    

My idea is to receive the imageReceived signal in the UI thread, convert decodedFrame directly into a QImage, and refresh the UI whenever a new frame has been decoded and delivered.
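
A sketch of that UI wiring, under the assumption that the signal is changed to carry a ready QImage (see the conversion sketch further below), so that only implicitly shared image data crosses the thread boundary; streamer and label are placeholder pointers:

    // Sketch: update a QLabel in the UI thread whenever a frame arrives.
    // Assumes the signal is changed to imageReceived(const QImage &).
    QObject::connect(streamer, &H264VideoStreamer::imageReceived,
                     label, [label](const QImage &img) {
        label->setPixmap(QPixmap::fromImage(img));
    }, Qt::QueuedConnection);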

Is this a good approach to decoding an H.264 stream? I am facing the following problems:

  • avcodec_decode_video2() returns a value equal to the size of the encoded buffer. Can encoded and decoded data really always be the same size?
  • got_frame is always 0, which means I never actually receive a full frame in the result. What could be the reason? Is the frame created incorrectly, or incorrectly converted from a QByteArray to an AVFrame?
  • How can the decoded AVFrame be converted back into a QByteArray, and can it simply be converted to a QImage? (a possible conversion is sketched below)
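
On the last point: a decoded H.264 AVFrame is normally planar YUV (typically AV_PIX_FMT_YUV420P), so frame->data[0] holds only the luma plane and cannot be wrapped in a QImage directly. One common approach, sketched here under the assumption that libswscale is available, is to convert the frame to RGB first:

    extern "C" {
    #include <libswscale/swscale.h>
    }
    #include <QImage>

    // Sketch: convert a decoded AVFrame into a QImage via libswscale.
    QImage avFrameToQImage(const AVFrame *frame)
    {
        SwsContext *sws = sws_getContext(frame->width, frame->height,
                                         static_cast<AVPixelFormat>(frame->format),
                                         frame->width, frame->height,
                                         AV_PIX_FMT_RGB32,   // matches QImage::Format_RGB32
                                         SWS_BILINEAR, nullptr, nullptr, nullptr);

        QImage image(frame->width, frame->height, QImage::Format_RGB32);
        uint8_t *dstData[4] = { image.bits(), nullptr, nullptr, nullptr };
        int dstLinesize[4] = { int(image.bytesPerLine()), 0, 0, 0 };

        sws_scale(sws, frame->data, frame->linesize, 0, frame->height,
                  dstData, dstLinesize);

        sws_freeContext(sws);
        return image;  // the QImage owns a deep copy of the pixels
    }

The resulting QImage can then be emitted to the UI thread instead of a raw QByteArray, which avoids the question of serializing the frame at all.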

1 Answer:

Answer 0 (score: 1):

The whole process of rendering frames manually can be left to another library. If the only goal is a Qt GUI showing the live feed from an IP camera, you can use the libvlc library. You can find a sample here: https://wiki.videolan.org/LibVLC_SampleCode_Qt
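
A minimal sketch of that approach, adapted from the linked sample; videoWidget is a placeholder for an existing QWidget* and the RTSP URL is a placeholder:

    #include <vlc/vlc.h>

    // Sketch: render a live network stream into an existing Qt widget via libvlc.
    libvlc_instance_t *vlc = libvlc_new(0, nullptr);
    libvlc_media_t *media = libvlc_media_new_location(vlc, "rtsp://camera-address/stream");
    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);

    // Hand VLC the native window of a Qt widget to draw into
    // (X11 shown; use libvlc_media_player_set_hwnd() on Windows).
    libvlc_media_player_set_xwindow(player, static_cast<uint32_t>(videoWidget->winId()));
    libvlc_media_player_play(player);

libvlc then handles the RTP depacketizing, H.264 decoding and rendering internally, so none of the manual buffer assembly above is needed.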