Speeding up matplotlib animation to a video file

Time: 2015-06-21 14:03:26

Tags: python matplotlib ffmpeg raspberry-pi raspbian

On Raspbian (Raspberry Pi 2), the following minimal example, stripped down from my script, correctly produces an mp4 file:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation

def anim_lift(x, y):

    #set up the figure
    fig = plt.figure(figsize=(15, 9))

    def animate(i):
        # update plot
        pointplot.set_data(x[i], y[i])

        return  pointplot

    # First frame
    ax0 = plt.plot(x,y)
    pointplot, = ax0.plot(x[0], y[0], 'or')

    anim = animation.FuncAnimation(fig, animate, repeat = False,
                                   frames=range(1,len(x)), 
                                   interval=200,
                                   blit=True, repeat_delay=1000)

    anim.save('out.mp4')
    plt.close(fig)

# Number of frames
nframes = 200

# Generate data
x = np.linspace(0, 100, num=nframes)
y = np.random.random_sample(np.size(x))

anim_lift(x, y)

Now, the file is produced with good quality and small size, but it takes 15 minutes to produce a 170-frame movie, which is unacceptable for my application. I am looking for a significant speedup; an increase in video file size is not a problem.

I believe the bottleneck in the video production is the temporary saving of the frames in png format. During processing I can see the png files appearing in the working directory, with the CPU load at only 25%.

Please suggest a solution. It may also be based on a different package than matplotlib.animation, such as OpenCV (which is already imported in my project anyway) or moviepy.

Versions in use:

  • python 2.7.3
  • matplotlib 1.1.1rc2
  • ffmpeg 0.8.17-6:0.8.17-1 + rpi1

3 answers:

Answer 0 (score: 4)

The bottleneck of saving an animation to file lies in the use of figure.savefig(). Here is a homemade subclass of matplotlib's FFMpegWriter, inspired by gaggio's answer. It doesn't use savefig (and thus ignores savefig_kwargs), but requires only minimal changes to whatever your animation script is.

from matplotlib.animation import FFMpegWriter

class FasterFFMpegWriter(FFMpegWriter):
    '''FFMpeg-pipe writer bypassing figure.savefig.'''
    def __init__(self, **kwargs):
        '''Initialize the writer object and set the default frame_format.'''
        super().__init__(**kwargs)
        self.frame_format = 'argb'

    def grab_frame(self, **savefig_kwargs):
        '''Grab the image information from the figure and save as a movie frame.

        Doesn't use savefig to be faster: savefig_kwargs will be ignored.
        '''
        try:
            # re-adjust the figure size and dpi in case it has been changed by the
            # user.  We must ensure that every frame is the same size or
            # the movie will not save correctly.
            self.fig.set_size_inches(self._w, self._h)
            self.fig.set_dpi(self.dpi)
            # Draw and save the frame as an argb string to the pipe sink
            self.fig.canvas.draw()
            self._frame_sink().write(self.fig.canvas.tostring_argb()) 
        except (RuntimeError, IOError) as e:
            out, err = self._proc.communicate()
            raise IOError('Error saving animation to file (cause: {0}) '
                      'Stdout: {1} StdError: {2}. It may help to re-run '
                      'with --verbose-debug.'.format(e, out, err)) 

I was able to create animations in half the time, or better, compared with the default FFMpegWriter.

You can use it as explained in this example.
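For context, here is a minimal sketch of how a writer instance is passed to anim.save. It uses the stock FFMpegWriter so that the snippet stands alone; substituting FasterFFMpegWriter from the class above is a one-line change. The headless Agg backend and the commented-out save call are assumptions for illustration (the save itself requires the ffmpeg binary on the PATH):

```python
import matplotlib
matplotlib.use('Agg')          # headless backend so the sketch runs anywhere
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation

# Stock FFMpegWriter keeps this snippet self-contained; swapping in
# FasterFFMpegWriter as defined above is a one-line change.
fig, ax = plt.subplots()
x = np.linspace(0, 100, num=50)
y = np.random.random_sample(50)
pointplot, = ax.plot(x[0], y[0], 'or')
ax.set_xlim(0, 100)
ax.set_ylim(0, 1)

def animate(i):
    pointplot.set_data([x[i]], [y[i]])
    return [pointplot]

anim = animation.FuncAnimation(fig, animate, frames=len(x), blit=True)
writer = animation.FFMpegWriter(fps=5)   # or: FasterFFMpegWriter(fps=5)
# anim.save('out.mp4', writer=writer)    # requires ffmpeg on the PATH
```

Passing the writer object explicitly, rather than a writer name string, is what lets the custom grab_frame implementation take effect.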

Answer 1 (score: 2)

An improved solution, based on the answer to this post, reduces the time by a factor of about 10.

import numpy as np
import matplotlib.pylab as plt
import matplotlib.animation as animation
import subprocess

def testSubprocess(x, y):

    #set up the figure
    fig = plt.figure(figsize=(15, 9))
    canvas_width, canvas_height = fig.canvas.get_width_height()

    # First frame
    ax0 = plt.plot(x,y)
    pointplot, = plt.plot(x[0], y[0], 'or')

    def update(frame):
        # your matplotlib code goes here
        pointplot.set_data(x[frame],y[frame])

    # Open an ffmpeg process
    outf = 'testSubprocess.mp4'
    cmdstring = ('ffmpeg', 
                 '-y', '-r', '1', # overwrite, 1fps
                 '-s', '%dx%d' % (canvas_width, canvas_height), # size of image string
                 '-pix_fmt', 'argb', # format
                 '-f', 'rawvideo',  '-i', '-', # tell ffmpeg to expect raw video from the pipe
                 '-vcodec', 'mpeg4', outf) # output encoding
    p = subprocess.Popen(cmdstring, stdin=subprocess.PIPE)

    # Draw frames and write to the pipe
    for frame in range(nframes):
        # draw the frame
        update(frame)
        fig.canvas.draw()

        # extract the image as an ARGB string
        string = fig.canvas.tostring_argb()

        # write to pipe
        p.stdin.write(string)

    # Finish up
    p.communicate()

# Number of frames
nframes = 200

# Generate data
x = np.linspace(0, 100, num=nframes)
y = np.random.random_sample(np.size(x))

testSubprocess(x, y)
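Each raw ARGB frame written to the pipe must be exactly width × height × 4 bytes, matching the `-s` argument, or ffmpeg will misinterpret the stream. A quick sanity check of the data volume involved (assuming matplotlib's default 100 dpi, under which a 15×9 inch figure yields a 1500×900 pixel canvas):

```python
# Geometry of the raw video stream fed to ffmpeg (100 dpi assumed).
width, height = 15 * 100, 9 * 100       # 1500 x 900 pixel canvas
bytes_per_frame = width * height * 4    # 4 bytes per ARGB pixel

print(bytes_per_frame)                  # 5400000 bytes per frame
print(bytes_per_frame * 200 / 1e9)      # 1.08 -- about 1 GB raw for 200 frames
```

None of this raw data ever touches the disk, which is exactly why the approach avoids the png bottleneck; the volume only matters for pipe throughput.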

I suspect a further speedup could be obtained by piping the raw image data to gstreamer, which can now benefit from hardware encoding on the Raspberry Pi; see this discussion.

Answer 2 (score: 0)

You should be able to use one of the writers that streams directly to ffmpeg, but something else is going very wrong.

import matplotlib.pyplot as plt
from matplotlib import animation


def anim_lift(x, y):

    #set up the figure
    fig, ax = plt.subplots(figsize=(15, 9))

    def animate(i):
        # update plot
        pointplot.set_data(x[i], y[i])

        return [pointplot, ]

    # First frame
    pointplot, = ax.plot(x[0], y[0], 'or')
    ax.set_xlim([0, 200])
    ax.set_ylim([0, 200])
    anim = animation.FuncAnimation(fig, animate, repeat = False,
                                   frames=range(1,len(x)),
                                   interval=200,
                                   blit=True, repeat_delay=1000)

    anim.save('out.mp4')
    plt.close(fig)


x = list(range(170))
y = list(range(170))
anim_lift(x, y)

Saving this as test.py (this is a cleaned-up version of the code I assume you actually ran, since plt.plot returns a list of Line2D objects, and lists do not have a plot method) gives:

(dd_py3k) ✔ /tmp 
14:45 $ time python test.py

real    0m7.724s
user    0m9.887s
sys     0m0.547s
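The claim above about plt.plot's return value is easy to verify in isolation (a small sketch, using the non-interactive Agg backend so no display is needed):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, no display required
import matplotlib.pyplot as plt

lines = plt.plot([0, 1], [0, 1])
print(type(lines))             # <class 'list'> -- a plain list of Line2D objects
print(hasattr(lines, 'plot'))  # False: hence the error in the original code
```

This is why the cleaned-up version calls ax.plot on the Axes object from plt.subplots instead of chaining off the return value of plt.plot.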