I want to encode frames coming from a camera with NvPipe and stream them over RTP using FFmpeg. When I try to decode the stream, my code produces the following errors:
[h264 @ 0x7f3c6c007e80] decode_slice_header error
[h264 @ 0x7f3c6c007e80] non-existing PPS 0 referenced
[h264 @ 0x7f3c6c007e80] decode_slice_header error
[h264 @ 0x7f3c6c007e80] non-existing PPS 0 referenced
[h264 @ 0x7f3c6c007e80] decode_slice_header error
[h264 @ 0x7f3c6c007e80] non-existing PPS 0 referenced
[h264 @ 0x7f3c6c007e80] decode_slice_header error
[h264 @ 0x7f3c6c007e80] no frame!
[h264 @ 0x7f3c6c007e80] non-existing PPS 0 referenced 0B f=0/0
Last message repeated 1 times
On another PC it does not even stream at all and fails with a segmentation fault in av_interleaved_write_frame(..). How do I correctly initialize the AVPacket and its time base so that the stream can be sent and received successfully with ffplay / VLC / other software?
My code:
avformat_network_init();
// init encoder
AVPacket *pkt = new AVPacket();
int targetBitrate = 1000000;
int targetFPS = 30;
const uint32_t width = 640;
const uint32_t height = 480;
NvPipe* encoder = NvPipe_CreateEncoder(NVPIPE_BGRA32, NVPIPE_H264, NVPIPE_LOSSY, targetBitrate, targetFPS);
// init stream output
std::string str = "rtp://127.0.0.1:49990";
AVStream* stream = nullptr;
AVOutputFormat *output_format = av_guess_format("rtp", nullptr, nullptr);
AVFormatContext *output_format_ctx = avformat_alloc_context();
avformat_alloc_output_context2(&output_format_ctx, output_format, output_format->name, str.c_str());
// open output url
if (!(output_format->flags & AVFMT_NOFILE)){
int ret = avio_open(&output_format_ctx->pb, str.c_str(), AVIO_FLAG_WRITE);
}
output_format_ctx->oformat = output_format;
output_format->video_codec = AV_CODEC_ID_H264;
stream = avformat_new_stream(output_format_ctx,nullptr);
stream->id = 0;
stream->codecpar->codec_id = AV_CODEC_ID_H264;
stream->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
stream->codecpar->width = width;
stream->codecpar->height = height;
stream->time_base.den = 1;
stream->time_base.num = targetFPS; // 30fps
/* Write the header */
avformat_write_header(output_format_ctx, nullptr); // this seems to destroy the timebase of the stream
std::vector<uint8_t> rgba(width * height * 4);
std::vector<uint8_t> compressed(rgba.size());
int frameCnt = 0;
// encoding and streaming
while (true)
{
frameCnt++;
// Encoding
// Construct dummy frame
for (uint32_t y = 0; y < height; ++y)
for (uint32_t x = 0; x < width; ++x)
rgba[4 * (y * width + x) + 1] = (255.0f * x* y) / (width * height) * (y % 100 < 50);
uint64_t size = NvPipe_Encode(encoder, rgba.data(), width * 4, compressed.data(), compressed.size(), width, height, false); // last parameter needs to be true for keyframes
av_init_packet(pkt);
pkt->data = compressed.data();
pkt->size = size;
pkt->pts = frameCnt;
if(!memcmp(compressed.data(), "\x00\x00\x00\x01\x67", 5)) {
pkt->flags |= AV_PKT_FLAG_KEY;
}
//stream
fflush(stdout);
// Write the compressed frame into the output
pkt->pts = av_rescale_q(frameCnt, AVRational {1, targetFPS}, stream->time_base);
pkt->dts = pkt->pts;
pkt->stream_index = stream->index;
/* Write the data on the packet to the output format */
av_interleaved_write_frame(output_format_ctx, pkt);
/* Reset the packet */
av_packet_unref(pkt);
}
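As an aside on the memcmp check above: comparing only the first five bytes ("\x00\x00\x00\x01\x67") misses keyframes whose bitstream begins with a 3-byte start code or with another NAL unit in front of the SPS. A more robust check is to scan the Annex B buffer for an SPS (NAL type 7) or IDR slice (NAL type 5). This is a minimal sketch; the function name is illustrative and not part of NvPipe or FFmpeg:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Scan an Annex B H.264 buffer for an SPS (type 7) or IDR slice (type 5)
// NAL unit; either one indicates the packet should carry AV_PKT_FLAG_KEY.
// Handles both 3-byte (00 00 01) and 4-byte (00 00 00 01) start codes.
bool containsKeyframeNal(const uint8_t* data, size_t size) {
    for (size_t i = 0; i + 3 < size; ++i) {
        if (data[i] != 0 || data[i + 1] != 0)
            continue;
        size_t nalStart = 0;
        if (data[i + 2] == 1)
            nalStart = i + 3;                            // 3-byte start code
        else if (i + 4 < size && data[i + 2] == 0 && data[i + 3] == 1)
            nalStart = i + 4;                            // 4-byte start code
        if (nalStart != 0 && nalStart < size) {
            uint8_t nalType = data[nalStart] & 0x1F;     // lower 5 bits
            if (nalType == 7 || nalType == 5)            // SPS or IDR
                return true;
        }
    }
    return false;
}
```

In the loop above, `pkt->flags |= AV_PKT_FLAG_KEY;` would then be guarded by `containsKeyframeNal(compressed.data(), size)`.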
The .sdp file used to open the stream in ffplay looks like this:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 127.0.0.1
t=0 0
a=tool:libavformat 58.18.101
m=video 49990 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1
Answer 0 (score: 0)
The code above never sends a keyframe (I-frame). The (obvious) solution is to request keyframes by setting the last parameter of NvPipe_Encode()
to true. To achieve a given GOP size gop_size,
do something like
NvPipe_Encode(encoder, rgba.data(), width * 4, compressed.data(),
              compressed.size(), width, height, frameCnt % gop_size == 0);
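Regarding the time-base part of the question: the SDP above advertises a=rtpmap:96 H264/90000, i.e. a 90 kHz RTP clock, which is why avformat_write_header() appears to "destroy" the stream's time base: the RTP muxer typically rewrites it to {1, 90000}. Rescaling the frame counter from {1, targetFPS} then produces pts steps of 90000 / targetFPS. A minimal sketch of that arithmetic (a hypothetical helper standing in for av_rescale_q in the exact-division case, not FFmpeg's actual implementation):

```cpp
#include <cstdint>

// Exact-division core of what av_rescale_q does: convert a tick count
// from timebase {num_src, den_src} to {num_dst, den_dst}.
// pts_out = pts_in * (num_src / den_src) * (den_dst / num_dst)
int64_t rescale_ticks(int64_t pts, int64_t num_src, int64_t den_src,
                      int64_t num_dst, int64_t den_dst) {
    return pts * num_src * den_dst / (den_src * num_dst);
}
```

For example, frame n at 30 fps maps to n * 3000 ticks of the 90 kHz clock, so rescaling against the stream's post-header time base (rather than the {targetFPS, 1} assigned before the header was written) keeps the pts consistent.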