Is it possible to merge parts of several images into a single combined output image using the FFMPEG C library?
Something like Java's Graphics.drawImage(Image, x, y, null);
Answer (score: 1)
Yes and no. FFmpeg has an interface for retrieving "frames". You can then access a frame as a pixel buffer in memory and process it however you like, including combining one frame with the previous one, or taking frames from two video sources and building a combined image in which one source appears as a window inside the other. But FFmpeg will not do that combining for you.
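As a rough illustration of that kind of combining, here is a minimal sketch of a drawImage-style copy. It assumes you have already decoded two frames and converted them to packed RGB24 buffers (for example with libswscale's sws_scale()); blit_rgb24 and its parameters are hypothetical names for this sketch, not part of the FFmpeg API:

#include <stdint.h>
#include <string.h>

/* Hypothetical drawImage-style blit: copy a src_w x src_h RGB24 image into a
 * destination RGB24 buffer at (dst_x, dst_y). Both buffers are assumed to be
 * packed RGB24 (3 bytes per pixel) with the given linesizes (bytes per row),
 * e.g. decoded frames converted with sws_scale(). */
static void blit_rgb24(uint8_t *dst, int dst_linesize, int dst_w, int dst_h,
                       const uint8_t *src, int src_linesize,
                       int src_w, int src_h, int dst_x, int dst_y)
{
    if (dst_x < 0 || dst_y < 0 ||
        dst_x + src_w > dst_w || dst_y + src_h > dst_h)
        return; /* source would not fit; a real implementation might clip */

    for (int y = 0; y < src_h; y++) {
        const uint8_t *src_row = src + y * src_linesize;
        uint8_t *dst_row = dst + (dst_y + y) * dst_linesize + dst_x * 3;
        memcpy(dst_row, src_row, (size_t)src_w * 3); /* copy one row of pixels */
    }
}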
Here is sample code that reads frames: https://ffmpeg.org/doxygen/3.1/demuxing_decoding_8c-example.html
if (pkt.stream_index == video_stream_idx) {
    /* decode video frame */
    ret = avcodec_decode_video2(video_dec_ctx, frame, got_frame, &pkt);
    if (ret < 0) {
        fprintf(stderr, "Error decoding video frame (%s)\n", av_err2str(ret));
        return ret;
    }

    if (*got_frame) {
        if (frame->width != width || frame->height != height ||
            frame->format != pix_fmt) {
            /* To handle this change, one could call av_image_alloc again and
             * decode the following frames into another rawvideo file. */
            fprintf(stderr, "Error: Width, height and pixel format have to be "
                    "constant in a rawvideo file, but the width, height or "
                    "pixel format of the input video changed:\n"
                    "old: width = %d, height = %d, format = %s\n"
                    "new: width = %d, height = %d, format = %s\n",
                    width, height, av_get_pix_fmt_name(pix_fmt),
                    frame->width, frame->height,
                    av_get_pix_fmt_name(frame->format));
            return -1;
        }

        printf("video_frame%s n:%d coded_n:%d pts:%s\n",
               cached ? "(cached)" : "",
               video_frame_count++, frame->coded_picture_number,
               av_ts2timestr(frame->pts, &video_dec_ctx->time_base));

        /* copy decoded frame to destination buffer:
         * this is required since rawvideo expects non aligned data */
        av_image_copy(video_dst_data, video_dst_linesize,
                      (const uint8_t **)(frame->data), frame->linesize,
                      pix_fmt, width, height);

        /* write to rawvideo file */
        fwrite(video_dst_data[0], 1, video_dst_bufsize, video_dst_file);
    }
}
Replace the fwrite call with your own operations on the buffer. As a first test, try swapping red and green to convince yourself that you can manipulate the buffer any way you want.
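A minimal sketch of that first test, assuming the destination buffer from the example above (video_dst_data[0]) holds packed RGB24 pixels, i.e. pix_fmt is AV_PIX_FMT_RGB24 (for other pixel formats you would convert first, e.g. with libswscale); swap_red_green_rgb24 is a hypothetical helper, not an FFmpeg function:

#include <stdint.h>

/* Swap the red and green channels of a packed RGB24 buffer in place.
 * linesize is the number of bytes per row, which may be larger than w * 3. */
static void swap_red_green_rgb24(uint8_t *buf, int linesize, int w, int h)
{
    for (int y = 0; y < h; y++) {
        uint8_t *row = buf + y * linesize;
        for (int x = 0; x < w; x++) {
            uint8_t r = row[3 * x + 0];
            row[3 * x + 0] = row[3 * x + 1]; /* R takes the old G value */
            row[3 * x + 1] = r;              /* G takes the old R value */
        }
    }
}

In the decode loop above you would call it right before the fwrite, for example swap_red_green_rgb24(video_dst_data[0], video_dst_linesize[0], width, height); and then write the modified buffer out as before.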