ffmpeg amerge and amix filter delay

Time: 2015-10-29 12:42:42

Tags: ffmpeg video-streaming filtering ip-camera

I need to take the audio streams from several IP cameras and merge them into one file, so that they sound simultaneously.

I tried the "amix" filter (for testing purposes I took the audio stream from the same camera twice; yes, I also tried two different cameras, and the result is the same):

ffmpeg -i rtsp://user:pass@172.22.5.202 -i rtsp://user:pass@172.22.5.202 -map 0:a -map 1:a  -filter_complex amix=inputs=2:duration=first:dropout_transition=3  -ar 22050 -vn -f flv rtmp://172.22.45.38:1935/live/stream1

Result: I say "Hello" and hear the first "Hello" in the speakers; about a second later I hear the second "Hello", instead of hearing both "Hello"s simultaneously.

And I tried the "amerge" filter:

ffmpeg -i rtsp://user:pass@172.22.5.202 -i rtsp://user:pass@172.22.5.202 -map 0:a -map 1:a  -filter_complex amerge -ar 22050 -vn -f flv rtmp://172.22.45.38:1935/live/stream1

Result: same as the first example, but now I hear the first "Hello" in the left speaker and, a second later, the second "Hello" in the right speaker, instead of hearing both "Hello"s at the same time in both speakers.

So, the question is: how do I make them sound simultaneously? Maybe you know some parameter, or some other command?

P.S. In case you need it, here is the command-line output of both variants. amix:

[root@minjust ~]# ffmpeg -i rtsp://admin:12345@172.22.5.202 -i rtsp://admin:12345@172.22.5.202 -map 0:a -map 1:a -filter_complex amix=inputs=2:duration=longest:dropout_transition=0 -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1
ffmpeg version N-76031-g9099079 Copyright (c) 2000-2015 the FFmpeg developers
  built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-16)
  configuration: --enable-gpl --enable-libx264 --enable-libmp3lame --enable-nonfree --enable-version3
  libavutil      55.  4.100 / 55.  4.100
  libavcodec     57.  6.100 / 57.  6.100
  libavformat    57.  4.100 / 57.  4.100
  libavdevice    57.  0.100 / 57.  0.100
  libavfilter     6. 11.100 /  6. 11.100
  libswscale      4.  0.100 /  4.  0.100
  libswresample   2.  0.100 /  2.  0.100
  libpostproc    54.  0.100 / 54.  0.100
Input #0, rtsp, from 'rtsp://admin:12345@172.22.5.202':
  Metadata:
    title           : Media Presentation
  Duration: N/A, start: 0.032000, bitrate: N/A
    Stream #0:0: Video: h264 (Baseline), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
    Stream #0:1: Audio: adpcm_g726, 8000 Hz, mono, s16, 16 kb/s
    Stream #0:2: Data: none
Input #1, rtsp, from 'rtsp://admin:12345@172.22.5.202':
  Metadata:
    title           : Media Presentation
  Duration: N/A, start: 0.032000, bitrate: N/A
    Stream #1:0: Video: h264 (Baseline), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
    Stream #1:1: Audio: adpcm_g726, 8000 Hz, mono, s16, 16 kb/s
    Stream #1:2: Data: none
Output #0, flv, to 'rtmp://172.22.45.38:1935/live/stream1':
  Metadata:
    title           : Media Presentation
    encoder         : Lavf57.4.100
    Stream #0:0: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 22050 Hz, mono, fltp (default)
    Metadata:
      encoder         : Lavc57.6.100 libmp3lame
Stream mapping:
  Stream #0:1 (g726) -> amix:input0
  Stream #1:1 (g726) -> amix:input1
  amix -> Stream #0:0 (libmp3lame)
Press [q] to stop, [?] for help
[rtsp @ 0x2689600] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[rtsp @ 0x2727c60] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[rtsp @ 0x2689600] max delay reached. need to consume packet
[NULL @ 0x268c500] RTP: missed 38 packets
[rtsp @ 0x2689600] max delay reached. need to consume packet
[NULL @ 0x268d460] RTP: missed 4 packets
[flv @ 0x2958360] Failed to update header with correct duration.
[flv @ 0x2958360] Failed to update header with correct filesize.
size=      28kB time=00:00:06.18 bitrate=  36.7kbits/s
video:0kB audio:24kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 16.331224%

And amerge:

[root@minjust ~]# ffmpeg -i rtsp://admin:12345@172.22.5.202 -i rtsp://admin:12345@172.22.5.202 -map 0:a -map 1:a -filter_complex amerge -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1
ffmpeg version N-76031-g9099079 Copyright (c) 2000-2015 the FFmpeg developers
  built with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-16)
  configuration: --enable-gpl --enable-libx264 --enable-libmp3lame --enable-nonfree --enable-version3
  libavutil      55.  4.100 / 55.  4.100
  libavcodec     57.  6.100 / 57.  6.100
  libavformat    57.  4.100 / 57.  4.100
  libavdevice    57.  0.100 / 57.  0.100
  libavfilter     6. 11.100 /  6. 11.100
  libswscale      4.  0.100 /  4.  0.100
  libswresample   2.  0.100 /  2.  0.100
  libpostproc    54.  0.100 / 54.  0.100
Input #0, rtsp, from 'rtsp://admin:12345@172.22.5.202':
  Metadata:
    title           : Media Presentation
  Duration: N/A, start: 0.064000, bitrate: N/A
    Stream #0:0: Video: h264 (Baseline), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
    Stream #0:1: Audio: adpcm_g726, 8000 Hz, mono, s16, 16 kb/s
    Stream #0:2: Data: none
Input #1, rtsp, from 'rtsp://admin:12345@172.22.5.202':
  Metadata:
    title           : Media Presentation
  Duration: N/A, start: 0.032000, bitrate: N/A
    Stream #1:0: Video: h264 (Baseline), yuv420p, 1280x720, 20 fps, 25 tbr, 90k tbn, 40 tbc
    Stream #1:1: Audio: adpcm_g726, 8000 Hz, mono, s16, 16 kb/s
    Stream #1:2: Data: none
[Parsed_amerge_0 @ 0x3069cc0] No channel layout for input 1
[Parsed_amerge_0 @ 0x3069cc0] Input channel layouts overlap: output layout will be determined by the number of distinct input channels
Output #0, flv, to 'rtmp://172.22.45.38:1935/live/stream1':
  Metadata:
    title           : Media Presentation
    encoder         : Lavf57.4.100
    Stream #0:0: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 22050 Hz, stereo, s16p (default)
    Metadata:
      encoder         : Lavc57.6.100 libmp3lame
Stream mapping:
  Stream #0:1 (g726) -> amerge:in0
  Stream #1:1 (g726) -> amerge:in1
  amerge -> Stream #0:0 (libmp3lame)
Press [q] to stop, [?] for help
[rtsp @ 0x2f71640] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[rtsp @ 0x300fb40] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
[rtsp @ 0x2f71640] max delay reached. need to consume packet
[NULL @ 0x2f744a0] RTP: missed 18 packets
[flv @ 0x3058b00] Failed to update header with correct duration.
[flv @ 0x3058b00] Failed to update header with correct filesize.
size=      39kB time=00:00:04.54 bitrate=  70.2kbits/s
video:0kB audio:36kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 8.330614%

Thanks.

Update 2015-10-30: I noticed an interesting detail when connecting two cameras (they have different microphones, so I can hear the difference between them): the order of the "Hello"s from the different cameras depends on the order of the inputs. With the command

ffmpeg -i rtsp://cam2 -i rtsp://cam1 -map 0:a -map 1:a -filter_complex amix=inputs=2:duration=longest:dropout_transition=0 -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1

I hear "Hello" from the first cam and then, about a second later, "Hello" from the second cam.

And with the command

ffmpeg -i rtsp://cam1 -i rtsp://cam2 -map 0:a -map 1:a -filter_complex amix=inputs=2:duration=longest:dropout_transition=0 -vn -ar 22050 -f flv rtmp://172.22.45.38:1935/live/stream1

I hear "Hello" from the second cam and then, about a second later, "Hello" from the first cam.

So, as far as I understand, ffmpeg does not open the inputs simultaneously, but in the given input order. Question: how can I tell ffmpeg to read the inputs simultaneously?

2 answers:

Answer 0 (score: 0)

amix works perfectly with two local files, so the problem is not the mixing of two audio streams itself.

When the inputs come from local files (or recorded streams), ffmpeg knows exactly when each one starts, so it can mix them into one audio track.

But when the inputs come from live streams, ffmpeg does not know exactly when each one started, so the start times of the different stream URLs will differ.
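If you can measure the relative offset between the two live streams (the asker heard roughly one second), one possible workaround is to delay the earlier input with the adelay filter before mixing. This is only a sketch: the ~1000 ms value, the camera URLs, and the assumption that the offset is stable are all placeholders, not a verified fix.

```shell
# Sketch: delay input 0 by an assumed 1000 ms (adjust to the offset you
# actually measure) so it lines up with input 1, then mix both streams.
ffmpeg -i rtsp://cam1 -i rtsp://cam2 \
  -filter_complex "[0:a]adelay=1000[a1];[a1][1:a]amix=inputs=2:duration=longest[a]" \
  -map "[a]" -ar 22050 -vn -f flv rtmp://172.22.45.38:1935/live/stream1
```

For mono audio (as in the logs above) adelay takes a single per-channel delay in milliseconds; a stereo input would need `adelay=1000|1000`.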

What's more, ffmpeg does not handle its inputs concurrently. That is why the order of the "Hello"s depends on the order of the inputs.
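One way to sidestep the sequential opening of inputs is to capture each camera in a separate ffmpeg process and mix the recordings afterwards. This is a hedged sketch, not a guaranteed fix: the filenames are placeholders, and the two processes still start a few milliseconds apart, so a small residual offset can remain.

```shell
# Sketch: record each camera in parallel (one process per input), then mix
# the two recordings offline, where live start times no longer matter.
ffmpeg -i rtsp://cam1 -vn -acodec pcm_s16le cam1.wav &
ffmpeg -i rtsp://cam2 -vn -acodec pcm_s16le cam2.wav &
wait  # stop both captures (e.g. with SIGINT), then mix:
ffmpeg -i cam1.wav -i cam2.wav -filter_complex amix=inputs=2 mixed.mp3
```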

I know only one solution to this problem. Adobe FMLE (Flash Media Live Encoder) supports timecode when streaming over RTMP. You can take the timecode from the live streams and then finally mix the two audio tracks into one.

Maybe you can start with this article: http://www.overdigital.com/2013/03/25/3ways-to-sync-data/

Answer 1 (score: 0)

Try

ffmpeg -i rtsp://user:pass@172.22.5.202 -i rtsp://user:pass@172.22.5.202 \
-filter_complex \
"[0:a]asetpts=PTS-STARTPTS[a1];[1:a]asetpts=PTS-STARTPTS[a2]; \
 [a1][a2]amix=inputs=2:duration=first:dropout_transition=3[a]" \
-map "[a]" -ar 22050 -vn -f flv rtmp://172.22.45.38:1935/live/stream1
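Separately from the mixing itself, the console logs above repeatedly warn "Thread message queue blocking; consider raising the thread_queue_size option". Raising it per input, as the warning itself suggests, may reduce input stalls and dropped RTP packets; the value 512 below is an arbitrary example, not a recommendation from the logs.

```shell
# -thread_queue_size applies to the -i that follows it, so set it per input.
ffmpeg -thread_queue_size 512 -i rtsp://user:pass@172.22.5.202 \
       -thread_queue_size 512 -i rtsp://user:pass@172.22.5.202 \
       -filter_complex amix=inputs=2 -vn -ar 22050 \
       -f flv rtmp://172.22.45.38:1935/live/stream1
```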