Can shared memory be created after forking in Python multiprocessing?

Asked: 2017-01-16 20:33:42

Tags: python multiprocessing fork shared-memory

I suspect the answer is yes. (I saw an assumption to that effect in the comments here.) Still, it would be very nice to be able to construct a new shared (raw) array between processes after forking, perhaps using some Pipe/Queue/Manager handshake to set it up. I'm not very familiar with operating system internals; how feasible is this?

Is there any clever workaround (memmap, perhaps?) that offers the same read/write speed as a true shared array?
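For reference, the usual approach is to create the shared array before forking so that the children inherit it. Below is a minimal sketch of that baseline (it assumes the default fork start method on Linux; the spawn start method would not share the array):

from multiprocessing import Process, RawArray

# Baseline: the shared raw array exists BEFORE the fork, so the
# child inherits the same memory (fork start method on Linux).
buf = RawArray("d", 4)  # four shared C doubles

def worker():
  buf[0] = 3.14         # write lands in the shared memory

if __name__ == "__main__":
  p = Process(target=worker)
  p.start()
  p.join()
  print(buf[0])         # prints 3.14 -- the child's write is visible

What I am asking is whether something equivalent can be arranged after the processes have already been forked.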

1 answer:

Answer 0 (score: 2):

I think it can be done by sharing a memory map over an existing file. Just run the example below multiple times and watch the output. Once all processes have opened the shared file, you can delete it from disk and continue using the shared memory. File locking is used here, but it may not be the best approach.

#!/usr/bin/env python3

import fcntl
import mmap
import os
import time
from contextlib import contextmanager, suppress

# Open (and create if necessary) a file of shared_size.
# Every sleep_timeout seconds:
#   - acquire an exclusive lock,
#   - read the data, and
#   - write new data

# 1kiB for example
shared_size = 1024
filename = "shared_data.bin"
sleep_timeout = 1

# Context manager to grab an exclusive lock on the
# first length bytes of a file and automatically
# release the lock.
@contextmanager
def lockf(fileno, length=0):
  try:
    fcntl.lockf(fileno, fcntl.LOCK_EX, length)
    yield
  finally:
    fcntl.lockf(fileno, fcntl.LOCK_UN, length)

def ensure_filesize(f, size):
  # make sure the file is at least size bytes long
  f.seek(0, os.SEEK_END)
  if f.tell() < size:
    f.truncate(size)

def read_and_update_data(f):
  f.seek(0)
  # rstrip() drops the NUL padding left over from truncate()
  print(f.readline().decode().rstrip("\x00\n"))
  f.seek(0)
  # terminate with a newline so readline() stops at the current
  # message instead of reading stale bytes from an older, longer one
  message = "Hello from process {} at {}\n".format(os.getpid(), time.asctime(time.localtime()))
  f.write(message.encode())

# Ignore Ctrl-C so we can quit cleanly
with suppress(KeyboardInterrupt):
  # open file for binary read/write and create if necessary
  with open(filename, "a+b") as f:
    ensure_filesize(f, shared_size)

    # Once all processes have opened the file, it can be removed
    #os.remove(filename)

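    # mmap defaults to a MAP_SHARED mapping on Unix, so every
    # process that maps this file sees the others' writes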
    with mmap.mmap(f.fileno(), shared_size) as mm:
      while True:
        with lockf(f.fileno(), length=shared_size):
          read_and_update_data(mm)
        time.sleep(sleep_timeout)
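To answer the speed question directly: the same file-backed mapping can be wrapped in a zero-copy numpy array, giving reads and writes at ordinary memory speed. Below is a minimal sketch, not part of the answer above, assuming a POSIX system with numpy installed; the file name shared_array.bin and the array size are illustrative choices. The shared region is created only after the fork, and the child learns it is ready through a pipe handshake, much like the Pipe/Queue idea in the question.

#!/usr/bin/env python3

import mmap
import os

import numpy as np

filename = "shared_array.bin"
n = 256               # number of float64 slots to share
size = n * 8          # bytes needed for n float64 values

r, w = os.pipe()      # handshake: parent signals "mapping is ready"
pid = os.fork()       # fork FIRST -- no shared memory exists yet

if pid == 0:
  # Child: wait for the parent's signal, then map the same file.
  os.close(w)
  os.read(r, 1)
  with open(filename, "r+b") as f:
    mm = mmap.mmap(f.fileno(), size)
  arr = np.frombuffer(mm, dtype=np.float64)  # zero-copy view
  print("child sees:", arr[0])               # prints 42.0
  os._exit(0)

# Parent: create and map the shared file only after the fork.
os.close(r)
with open(filename, "w+b") as f:
  f.truncate(size)
  mm = mmap.mmap(f.fileno(), size)
arr = np.frombuffer(mm, dtype=np.float64)
arr[0] = 42.0         # written straight into the shared pages
os.write(w, b"x")     # tell the child it can map the file now
os.waitpid(pid, 0)
os.remove(filename)

Once both processes hold the mapping, numpy operations on arr run at normal memory speed, so read/write performance is essentially the same as a RawArray created before the fork; only the setup (a small file and a page-cache mapping) differs.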