I want to manipulate a video on the fly so that it displays a number pulled from an external source.
This works well as long as I only use the drawtext filter. Unfortunately, the customer wants the overlay to look nicer. I have an image that contains the digits in the required style, and I can use it to generate an image containing the current number on the fly. However, I am not sure how to merge this image with the video using the FFmpeg overlay filter.
I have already found command-line examples for this, such as:
ffmpeg -itsoffset 5 -i in.mp4 -r 25 -loop 1 -i intro.png -filter_complex "[1:v] fade=out:125:25:alpha=1 [intro]; [0:v][intro] overlay [v]" -map "[v]" -map 0:a -acodec copy out.mp4
The main problem is that I cannot use the command-line tool: I have to manipulate the video on the fly (the video data is handed on to a third-party streaming library). My working drawtext solution only needs a single source, the original video. For the overlay I would have to specify a second source (the image) somehow, and I do not know how to do that.
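Judging from the command-line example, I assume the filter description itself would look roughly like the following, where "in" and "out" are the default labels already used in my code below and "logo" is a label I invented for the image input (untested guess):
/* My guess at a two-input filter description; "logo" is a made-up label
 * for the image source, "in"/"out" match the labels used in init_filters. */
const char *filters_descr = "[in][logo] overlay=x=10:y=10 [out]";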
At the moment I use this function (taken from the FFmpeg filtering example) to create the filter:
static int init_filters(AVFormatContext *fmt_ctx,
                        AVCodecContext *dec_ctx,
                        AVFilterContext **buffersink_ctx,
                        AVFilterContext **buffersrc_ctx,
                        AVFilterGraph **filter_graph,
                        int video_stream_index,
                        const char *filters_descr)
{
    char args[512];
    int ret = 0;
    AVFilter *buffersrc = avfilter_get_by_name("buffer");
    AVFilter *buffersink = avfilter_get_by_name("buffersink");
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterInOut *inputs = avfilter_inout_alloc();
    AVRational time_base = fmt_ctx->streams[video_stream_index]->time_base;
    enum AVPixelFormat pix_fmts[] = { dec_ctx->pix_fmt, AV_PIX_FMT_NONE };

    *filter_graph = avfilter_graph_alloc();
    if (!outputs || !inputs || !*filter_graph) {
        ret = AVERROR(ENOMEM);
        goto end;
    }

    /* buffer video source: the decoded frames from the decoder will be inserted here. */
    snprintf(args, sizeof(args),
             "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
             time_base.num, time_base.den,
             dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);
    ret = avfilter_graph_create_filter(buffersrc_ctx, buffersrc, "in",
                                       args, NULL, *filter_graph);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot create buffer source\n");
        goto end;
    }

    /* buffer video sink: to terminate the filter chain. */
    ret = avfilter_graph_create_filter(buffersink_ctx, buffersink, "out",
                                       NULL, NULL, *filter_graph);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot create buffer sink\n");
        goto end;
    }

    ret = av_opt_set_int_list(*buffersink_ctx, "pix_fmts", pix_fmts,
                              AV_PIX_FMT_NONE, AV_OPT_SEARCH_CHILDREN);
    if (ret < 0) {
        av_log(NULL, AV_LOG_ERROR, "Cannot set output pixel format\n");
        goto end;
    }

    /*
     * Set the endpoints for the filter graph. The filter_graph will
     * be linked to the graph described by filters_descr.
     */

    /*
     * The buffer source output must be connected to the input pad of
     * the first filter described by filters_descr; since the first
     * filter input label is not specified, it is set to "in" by
     * default.
     */
    outputs->name = av_strdup("in");
    outputs->filter_ctx = *buffersrc_ctx;
    outputs->pad_idx = 0;
    outputs->next = NULL;

    /*
     * The buffer sink input must be connected to the output pad of
     * the last filter described by filters_descr; since the last
     * filter output label is not specified, it is set to "out" by
     * default.
     */
    inputs->name = av_strdup("out");
    inputs->filter_ctx = *buffersink_ctx;
    inputs->pad_idx = 0;
    inputs->next = NULL;

    if ((ret = avfilter_graph_parse_ptr(*filter_graph, filters_descr,
                                        &inputs, &outputs, NULL)) < 0)
        goto end;

    if ((ret = avfilter_graph_config(*filter_graph, NULL)) < 0)
        goto end;

end:
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
    return ret;
}
I can then use the same kind of filter string as on the console. This code later applies the filter to the current frame (this->pVideoFrame):
if (av_buffersrc_add_frame_flags(buffersrc_ctx, this->pVideoFrame, AV_BUFFERSRC_FLAG_KEEP_REF) < 0) {
    av_log(NULL, AV_LOG_ERROR, "Error while feeding the filtergraph\n");
    break;
}

/* pull filtered frames from the filtergraph */
while (1) {
    int ret = av_buffersink_get_frame(buffersink_ctx, this->pVideoFilterFrame);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        break;
    if (ret < 0)  /* stop on real errors as well, not only EAGAIN/EOF */
        break;
}
To make this work with an overlay filter specification, I would somehow have to get the image into the graph as well, and that is the part I do not know how to do.
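My guess is that init_filters would have to create a second "buffer" source for the image and add a second entry to the outputs list, so that the "[logo]" label from the filter string resolves to it. Something along these lines, although this is untested and logo_src_ctx, logo_width and logo_height are just placeholder names of mine:
/* Untested sketch of a second buffer source that is meant to feed the image. */
AVFilterContext *logo_src_ctx = NULL;
char logo_args[512];
snprintf(logo_args, sizeof(logo_args),
         "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=1/1",
         logo_width, logo_height, AV_PIX_FMT_RGBA,
         time_base.num, time_base.den);
ret = avfilter_graph_create_filter(&logo_src_ctx, buffersrc, "logo",
                                   logo_args, NULL, *filter_graph);

/* Second entry in the "outputs" list so that "[logo]" in filters_descr
 * is connected to this source when avfilter_graph_parse_ptr runs. */
AVFilterInOut *logo_output = avfilter_inout_alloc();
logo_output->name       = av_strdup("logo");
logo_output->filter_ctx = logo_src_ctx;
logo_output->pad_idx    = 0;
logo_output->next       = NULL;
outputs->next           = logo_output; /* chain behind the existing "in" entry */
Is that the right way to register an additional input with avfilter_graph_parse_ptr, or is there a different mechanism for extra sources?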
Ideally I would like to feed the image to the graph from a memory buffer, but if necessary I could write the generated image to a temp file first. The image stays valid for at least a second before the number changes. The C++ library that uses FFmpeg to manipulate and stream the video is part of a larger .NET project.
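For the memory-buffer case I imagine the generated RGBA pixels would have to be wrapped in an AVFrame and pushed into that second source, roughly like this (again only a sketch; rgba_pixels, logo_width and logo_height are placeholders, and I am not sure whether the image has to be pushed once or once per video frame):
/* Sketch: wrap the generated RGBA pixels in an AVFrame and feed the second source. */
AVFrame *logo_frame = av_frame_alloc();
logo_frame->format = AV_PIX_FMT_RGBA;
logo_frame->width  = logo_width;
logo_frame->height = logo_height;
av_frame_get_buffer(logo_frame, 0);          /* 0 = default alignment */

for (int y = 0; y < logo_height; y++)        /* copy rows, respecting the frame's stride */
    memcpy(logo_frame->data[0] + y * logo_frame->linesize[0],
           rgba_pixels + y * logo_width * 4,
           logo_width * 4);

logo_frame->pts = this->pVideoFrame->pts;    /* reuse the current video timestamp? */

if (av_buffersrc_add_frame_flags(logo_src_ctx, logo_frame,
                                 AV_BUFFERSRC_FLAG_KEEP_REF) < 0)
    av_log(NULL, AV_LOG_ERROR, "Error while feeding the logo source\n");
av_frame_free(&logo_frame);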
Any help would be greatly appreciated.
Best regards,
Ludwig