(I'm new to ffmpeg.) I have an image source that saves files to a given folder at a rate of 30 fps. I want to wait for every chunk of (say) 30 frames, encode them to h264 and stream the result over RTP to another application.
I considered writing a python application that simply waits for the images and then executes an ffmpeg command. For that I wrote the following code:
main.py:
import os
import Helpers
import argparse
import IniParser
import subprocess
from functools import partial
from Queue import Queue
from threading import Semaphore, Thread

def Run(config):
    os.chdir(config.Workdir)
    iteration = 1
    q = Queue()
    Thread(target=RunProcesses, args=(q, config.AllowedParallelRuns)).start()
    while True:
        Helpers.FileCount(config.FramesPathPattern, config.ChunkSize * iteration)
        command = config.FfmpegCommand.format(startNumber=(iteration - 1) * config.ChunkSize, vFrames=config.ChunkSize)
        runFunction = partial(subprocess.Popen, command)
        q.put(runFunction)
        iteration += 1

def RunProcesses(queue, semaphoreSize):
    semaphore = Semaphore(semaphoreSize)
    while True:
        runFunction = queue.get()
        Thread(target=HandleProcess, args=(runFunction, semaphore)).start()

def HandleProcess(runFunction, semaphore):
    semaphore.acquire()
    p = runFunction()
    p.wait()
    semaphore.release()

if __name__ == '__main__':
    argparser = argparse.ArgumentParser()
    argparser.add_argument("config", type=str, help="Path for the config file")
    args = argparser.parse_args()
    iniFilePath = args.config
    config = IniParser.Parse(iniFilePath)
    Run(config)
Helpers.py (not really relevant):
import os
import time
from glob import glob

def FileCount(pattern, count):
    count = int(count)
    lastCount = 0
    while True:
        currentCount = glob(pattern)
        if lastCount != currentCount:
            lastCount = currentCount
            if len(currentCount) >= count and all([CheckIfClosed(f) for f in currentCount]):
                break
        time.sleep(0.05)

def CheckIfClosed(filePath):
    try:
        os.rename(filePath, filePath)
        return True
    except:
        return False
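Note that the `os.rename(filePath, filePath)` trick above is only a reliable "is the writer done?" check on Windows, where renaming a file that another process holds open fails. A cross-platform alternative (a sketch of my own, not part of the original code) is to treat a file as complete once its size stops changing between two polls:

```python
import os
import time

def looks_complete(path, settle=0.1):
    """Heuristic: treat a file as fully written once its size is
    stable across two polls spaced `settle` seconds apart."""
    try:
        first = os.path.getsize(path)
        time.sleep(settle)
        return os.path.getsize(path) == first
    except OSError:
        # File vanished or is not readable yet.
        return False
```

This is still a heuristic (a slow writer could pause longer than `settle`), but it does not depend on platform-specific rename semantics.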
I used the following config file:
Workdir = "C:\Developer\MyProjects\Streaming\OutputStream\PPM"
; Workdir is the directory of reference from which all paths are relative to.
; You may still use full paths if you wish.
FramesPathPattern = "F*.ppm"
; The path pattern (wildcards allowed) where the rendered images are stored to.
; We use this pattern to detect how many rendered images are available for streaming.
; When a chunk of frames is ready - we stream it (or store to disk).
ChunkSize = 30 ; Number of frames for bulk.
; ChunkSize sets the number of frames we need to wait for, in order to execute the ffmpeg command.
; If the folder already contains several chunks, it will first process the first chunk, then second, and so on...
AllowedParallelRuns = 1 ; Number of parallel allowed processes of ffmpeg.
; This sets how many parallel ffmpeg processes are allowed.
; If more than one chunk is available in the folder for processing, we will execute several ffmpeg processes in parallel.
; Only when one of the processes finishes will we allow another process to execute.
FfmpegCommand = "ffmpeg -re -r 30 -start_number {startNumber} -i F%08d.ppm -vframes {vFrames} -vf vflip -f rtp rtp://127.0.0.1:1234" ; Command to execute when a bulk is ready for streaming.
; Once a chunk is ready for processing, this is the command that will be executed (same as running it from the terminal).
; There is however a minor difference. Since every chunk starts with a different frame number, you can use the
; expression "{startNumber}" which automatically takes the value of the matching start frame number.
; You can also use "{vFrames}" as an expression for the ChunkSize which was set above in the "ChunkSize" entry.
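The IniParser module used by main.py isn't shown. For reference, here is a minimal sketch of how such a sectionless, semicolon-commented file could be parsed with Python 3's standard configparser by prepending a dummy section header; the attribute-style Config class is my own assumption, not the author's actual implementation:

```python
import configparser

class Config(object):
    """Attribute-style access over parsed key/value pairs (hypothetical)."""
    def __init__(self, pairs):
        for key, value in pairs.items():
            # Values in the file may be wrapped in double quotes.
            setattr(self, key, value.strip().strip('"'))

def parse(text):
    # The file has no [section] header, so prepend a dummy one;
    # strip "; ..." inline comments after each value.
    parser = configparser.ConfigParser(inline_comment_prefixes=(';',))
    parser.optionxform = str  # preserve the CamelCase key names
    parser.read_string('[root]\n' + text)
    return Config(dict(parser['root']))
```

With this sketch, `parse(open(path).read()).ChunkSize` would yield the string `"30"`; main.py's `int(count)` conversion in Helpers then handles the type.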
Note that if I set "AllowedParallelRuns = 2" then it allows several ffmpeg processes to run simultaneously.
I then tried playing it with ffplay to see if I was doing it right.
The first chunk streams fine. The following chunks are not so great. I get a lot of [sdp @ 0000006de33c9180] RTP: dropping old packet received too late messages.
What should I do so that ffplay plays the video in the order of the incoming images? Is running parallel ffmpeg processes the right approach? Is there a better solution to my problem?
Thanks!
Answer 0 (score: 0)
As I stated in the comments, since you re-run ffmpeg for every chunk, the pts values get reset each time, but the client perceives it all as a single continuous ffmpeg stream and therefore expects increasing PTS values.
As I said, you could use an ffmpeg python wrapper and control the streaming yourself, but, yeah, that is quite a lot of code. However, there is actually a dirty workaround.
So, apparently there is an -itsoffset parameter which you can use to offset the input timestamps (see the FFmpeg documentation). Since you know and control the rate, you can pass an increasing value via this parameter, so that each next stream is offset by the appropriate duration. E.g. if you stream 30 frames each time, and you know the fps is 30, then those 30 frames create a one-second interval. So on each call to ffmpeg you would increase the -itsoffset value by one second, which should then be added to the output PTS values. But I can't guarantee this works.
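A sketch of how that incrementing offset could be wired into the existing command template (the -itsoffset flag is a real ffmpeg option; the helper function and constants below are illustrative, and as noted, this may or may not fix the dropped-packet warnings):

```python
CHUNK_SIZE = 30  # frames per chunk, as in the config's ChunkSize
FPS = 30         # known, controlled frame rate

def build_command(iteration):
    """Build the ffmpeg command for the given chunk (1-based),
    shifting input timestamps so PTS keeps increasing across
    separate ffmpeg runs."""
    start_number = (iteration - 1) * CHUNK_SIZE
    # Each chunk covers CHUNK_SIZE / FPS seconds; offset by the
    # total duration of all previous chunks.
    offset_seconds = (iteration - 1) * CHUNK_SIZE / float(FPS)
    return (
        "ffmpeg -re -r 30 -itsoffset {offset} -start_number {start} "
        "-i F%08d.ppm -vframes {vframes} -vf vflip -f rtp rtp://127.0.0.1:1234"
    ).format(offset=offset_seconds, start=start_number, vframes=CHUNK_SIZE)
```

Note that -itsoffset is placed before the -i it applies to; chunk 1 gets offset 0.0, chunk 2 gets 1.0, and so on.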
Since the -itsoffset idea didn't work out, you could also try feeding the jpeg images to ffmpeg via stdin - see this link.
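A minimal sketch of that single-process approach, assuming one long-lived ffmpeg reads a concatenated image stream from stdin via the image2pipe demuxer (the function names are illustrative, and ffmpeg must be on PATH); because there is only one process, PTS values increase naturally across all frames:

```python
import subprocess

def streamer_command(fps=30, target="rtp://127.0.0.1:1234"):
    """Command for one long-lived ffmpeg reading images from stdin ("-")."""
    return ["ffmpeg", "-f", "image2pipe", "-r", str(fps), "-i", "-",
            "-vf", "vflip", "-f", "rtp", target]

def start_streamer():
    """Launch the single ffmpeg process with a writable stdin pipe."""
    return subprocess.Popen(streamer_command(), stdin=subprocess.PIPE)

def feed_frame(process, image_path):
    """Append one image file's bytes to ffmpeg's stdin."""
    with open(image_path, "rb") as handle:
        process.stdin.write(handle.read())
    process.stdin.flush()
```

The waiting loop from main.py would then call feed_frame for each newly completed image instead of spawning a new ffmpeg per chunk.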