I'm working with a project based on the latest FFmpeg git source tree, linked against the shared DLLs that Zeranoe publishes at https://ffmpeg.zeranoe.com/builds/
The playback code works and loops. It plays raw h265 files as well as mpeg, avi and mpg files. However, as soon as an mp4 or mkv container is given as the input file, regardless of what is inside it, the codec dumps a stream of errors. It makes no difference whether the content is HEVC or h264:
[h264 @ 00000000xyz] No start code is found
[h264 @ 00000000xyz] Error splitting the input into NAL units.
To make things even stranger, ffplay.exe plays these same files just fine.
I realize I could work around this by first converting the files to a raw format, but I would like to be able to read and parse the mp4 files directly. Since I'm using Zeranoe's prebuilt libraries, my guess is that something wasn't enabled during the build, but then I would expect ffplay to fail as well. Do I need to set a flag on the format context or codec context, or supply some kind of filter identifier?
The movies that play fine come from http://bbb3d.renderfarming.net/download.html, http://www.w6rz.net/ and http://www.sample-videos.com/
These work:
big_buck_bunny_480p_surround-fix.avi
bigbuckbunny_480x272.h265
As a total FFmpeg noob, please help me understand the errors and how to fix them. If the prebuilt libraries are the culprit, a second question is whether anyone has a handy cmake setup to build them for Windows x64 and x32, debug and release targets.
Here is the source that initializes FFmpeg for reading:
avdevice_register_all();
avfilter_register_all();
av_register_all();
avformat_network_init();
The format is parsed as follows:
m_FormatContext = avformat_alloc_context();
if (avformat_open_input(&m_FormatContext, file.GetPath().ToString().c_str(), NULL, NULL) != 0)
{
    //std::cout << "failed to open input" << std::endl;
    success = false;
}
// find stream info
if (success)
{
    if (avformat_find_stream_info(m_FormatContext, NULL) < 0)
    {
        //std::cout << "failed to get stream info" << std::endl;
        success = false;
    }
}
The stream is opened like this:
m_VideoStream = avstream;
m_FormatContext = formatContext;
if (m_VideoStream)
{
    m_StreamIndex = m_VideoStream->stream_identifier;
    AVCodecParameters *codecpar = m_VideoStream->codecpar;
    if (codecpar)
    {
        AVCodecID codec_id = codecpar->codec_id;
        m_Decoder = avcodec_find_decoder(codec_id);
        if (m_Decoder)
        {
            m_CodecContext = avcodec_alloc_context3(m_Decoder);
            if (m_CodecContext)
            {
                m_CodecContext->width = codecpar->width;
                m_CodecContext->height = codecpar->height;
                m_VideoSize = i3(codecpar->width, codecpar->height, 1);
                success = 0 == avcodec_open2(m_CodecContext, m_Decoder, NULL);
                if (success)
                {
                    if (m_CodecContext)
                    {
                        int size = av_image_get_buffer_size(format, m_CodecContext->width, m_CodecContext->height, 1);
                        if (size > 0)
                        {
                            av_frame = av_frame_alloc();
                            gl_frame = av_frame_alloc();
                            uint8_t *internal_buffer = (uint8_t *)av_malloc(size * sizeof(uint8_t));
                            av_image_fill_arrays((uint8_t**)((AVPicture *)gl_frame->data), (int*)((AVPicture *)gl_frame->linesize), internal_buffer, format, m_CodecContext->width, m_CodecContext->height, 1);
                            m_Packet = (AVPacket *)av_malloc(sizeof(AVPacket));
                        }
                    }
                }
                if (!success)
                {
                    avcodec_close(m_CodecContext);
                    avcodec_free_context(&m_CodecContext);
                    m_CodecContext = NULL;
                    m_Decoder = NULL;
                    m_VideoStream = NULL;
                }
            }
            else
            {
                m_Decoder = NULL;
                m_VideoStream = NULL;
            }
        }
    }
}
Decoding happens on a single thread:
do
{
    if (av_read_frame(m_FormatContext, m_Packet) < 0)
    {
        av_packet_unref(m_Packet);
        m_AllPacketsSent = true;
    }
    else
    {
        if (m_Packet->stream_index == m_StreamIndex)
        {
            avcodec_send_packet(m_CodecContext, m_Packet);
        }
    }
    int frame_finished = avcodec_receive_frame(m_CodecContext, av_frame);
    if (frame_finished == 0)
    {
        if (!conv_ctx)
        {
            conv_ctx = sws_getContext(m_CodecContext->width,
                m_CodecContext->height, m_CodecContext->pix_fmt,
                m_CodecContext->width, m_CodecContext->height, format, SWS_BICUBIC, NULL, NULL, NULL);
        }
        sws_scale(conv_ctx, av_frame->data, av_frame->linesize, 0, m_CodecContext->height, gl_frame->data, gl_frame->linesize);
        switch (format)
        {
            case AV_PIX_FMT_BGR32_1:
            case AV_PIX_FMT_RGB32_1:
            case AV_PIX_FMT_0BGR32:
            case AV_PIX_FMT_0RGB32:
            case AV_PIX_FMT_BGR32:
            case AV_PIX_FMT_RGB32:
            {
                m_CodecContext->bits_per_raw_sample = 32; break;
            }
            default:
            {
                FWASSERT(format == AV_PIX_FMT_RGB32, "The format changed, update the bits per raw sample!"); break;
            }
        }
        size_t bufferSize = m_CodecContext->width * m_CodecContext->height * m_CodecContext->bits_per_raw_sample / 8;
        m_Buffer.Realloc(bufferSize, false, gl_frame->data[0]);
        m_VideoSize = i3(m_CodecContext->width, m_CodecContext->height, 1);
        result = true;
        // sends the image buffer straight to the locked texture here..
        // glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, codec_ctx->width, codec_ctx->height, GL_RGB, GL_UNSIGNED_BYTE, gl_frame->data[0]);
    }
    av_packet_unref(m_Packet);
} while (m_Packet->stream_index != m_StreamIndex);
m_FrameDecoded = result;
Any insight is much appreciated!
Answer 0 (score: 3)
Instead of implicitly providing only the width and height here:
m_CodecContext->width = codecpar->width;
m_CodecContext->height = codecpar->height;
copy all of the stream's parameters into the codec context with avcodec_parameters_to_context(m_CodecContext, codecpar). That call also copies the extradata, which is where mp4 and mkv store the SPS/PPS headers; without it the h264 parser searches for Annex-B start codes that are not present in the stream, producing exactly the errors shown above.
Answer 1 (score: 0)
Adding some more explanation for anyone who comes across this: mp4 and mkv containers store the SPS/PPS data out of band, apart from the frames themselves, so a decoder context constructed with the defaults will always fail with NAL-search errors.
Read H264 SPS & PPS NAL bytes using libavformat APIs
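The "No start code is found" message can be reproduced in miniature: Annex-B streams (raw .h264/.h265 files) prefix each NAL unit with a start code, while mp4/mkv payloads are length-prefixed instead. A minimal check, independent of FFmpeg, that shows the distinction:

```cpp
#include <stddef.h>
#include <stdint.h>

// Returns true if the buffer begins with an Annex-B start code
// (00 00 01 or 00 00 00 01). Length-prefixed payloads, as stored in
// mp4/mkv, normally fail this check because the first bytes are a
// NAL-unit size, not a start code.
bool has_annexb_start_code(const uint8_t *buf, size_t len)
{
    if (len >= 3 && buf[0] == 0 && buf[1] == 0 && buf[2] == 1)
        return true;
    if (len >= 4 && buf[0] == 0 && buf[1] == 0 && buf[2] == 0 && buf[3] == 1)
        return true;
    return false;
}
```

This is essentially what the parser behind the error message is doing when it scans the input.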
If, due to some code or architecture issue, you really have no luck with AVCodecParameters, you have to fill AVCodecContext->extradata manually with the SPS/PPS fields that the h264 stream parser expects.
How to fill 'extradata' field of AVCodecContext with SPS and PPS data?