I need some help accurately trimming a video with the FFmpeg C API

Time: 2015-03-20 19:29:21

Tags: c video ffmpeg

I need some help accurately trimming a video with the FFmpeg C API. What I'm seeing is that when the input video stream's time_base is 1/48000, the trim start and end times are picked correctly. But when the input video stream's time_base is not 1/48000, the trim is incorrect.

Time bases of 1/30000 and 1/24000 trim to half the expected video stream length - 10 s instead of 20 s. A time base of 1/25 trims almost nothing - the output file is only a few KB.

The audio stream always seems to be trimmed correctly.

For example, if I try to trim the first 20 seconds of a video whose video stream time base is 1/30000, the output mp4 is 20 seconds long, but it contains the first 10 seconds of video and the first 20 seconds of audio.

I think I'm calculating end_time incorrectly, but I'm not sure why it is correct for a 1/48000 time_base stream.

record[i].start_time = av_rescale_q((int64_t)( start_time * AV_TIME_BASE ), default_timebase, in_stream->time_base);
record[i].end_time = av_rescale_q((int64_t)( end_time   * AV_TIME_BASE ), default_timebase, in_stream->time_base);

Here is a more complete code sample:

int num_of_streams = ifmt_ctx->nb_streams;
if(num_of_streams > 0) {
    // keeps track of each stream's trimmed start and end times
    struct stream_pts record[num_of_streams];

    for (i = 0; i < num_of_streams; i++) {
        AVStream *in_stream = ifmt_ctx->streams[i];
        AVStream *out_stream = avformat_new_stream(ofmt_ctx, in_stream->codec->codec);
        if (!out_stream) {
            LOGE("=> Failed allocating output stream");
            ret = AVERROR_UNKNOWN;
            return close_connection(ret);
        }

        ret = avcodec_copy_context(out_stream->codec, in_stream->codec);
        if (ret < 0) {
            LOGE("=> Failed to copy context from input to output stream codec context");
            return close_connection(ret);
        }
        out_stream->codec->codec_tag = 0;
        if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
            out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;

        AVRational default_timebase;
        default_timebase.num = 1;
        default_timebase.den = AV_TIME_BASE;

        // determine start/end times for each stream
        record[i].index = i;
        record[i].start_time = av_rescale_q((int64_t)( start_time * AV_TIME_BASE ), default_timebase, in_stream->time_base);
        record[i].end_time = av_rescale_q((int64_t)( end_time   * AV_TIME_BASE ), default_timebase, in_stream->time_base);
    }

    av_dump_format(ofmt_ctx, 0, output_file, 1);

    if (!(ofmt->flags & AVFMT_NOFILE)) {
        ret = avio_open(&ofmt_ctx->pb, output_file, AVIO_FLAG_WRITE);
        if (ret < 0) {
            LOGE("=> Could not open output file '%s'", output_file);
            return close_connection(ret);
        }
    }

    ret = avformat_write_header(ofmt_ctx, NULL);
    if (ret < 0) {
        LOGE("=> Error occurred when opening output file");
        return close_connection(ret);
    }

    while (1) {
        AVStream *in_stream, *out_stream;

        ret = av_read_frame(ifmt_ctx, &pkt);
        if (ret < 0)
            break;

        in_stream  = ifmt_ctx->streams[pkt.stream_index];
        out_stream = ofmt_ctx->streams[pkt.stream_index];

        // copy packet
        pkt.pts = av_rescale_q_rnd(pkt.pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
        pkt.dts = av_rescale_q_rnd(pkt.dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
        pkt.duration = av_rescale_q(pkt.duration, in_stream->time_base, out_stream->time_base);
        pkt.pos = -1;

        // write the frames we're looking for
        if(pkt.pts >= record[pkt.stream_index].start_time && pkt.pts <= record[pkt.stream_index].end_time) {
            ret = av_interleaved_write_frame(ofmt_ctx, &pkt);
            if (ret < 0) {
                LOGE("=> Error muxing packet");
                break;
            }
        }

        av_free_packet(&pkt);
    }
}

0 Answers:
