I am trying to load about 60 Keras models into a namespace object, and the process is really slow. I want to take advantage of multiple cores, since the namespace object will eventually be shared with Pool worker processes. So here is what I did:
import multiprocessing
from keras.models import load_model
import pdb
from functools import partial

def load(ns, model_name):
    folder = directory  # `directory` is assumed to be defined elsewhere
    model = load_model(folder + model_name + '.h5')
    setattr(ns, model_name, model)
    print(model_name, ' loaded')

def main():
    mgr = multiprocessing.Manager()
    ns = mgr.Namespace()
    f = partial(load, ns)
    multiprocessing.set_start_method('spawn', force=True)
    pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
    pool.map_async(f, ['psy_01', 'psy_02', 'psy_03', 'psy_04',
                       'psy_05', 'psy_06', 'psy_07', 'psy_08'])
    pool.close()
    pool.join()
    print(ns._getvalue())

if __name__ == '__main__':
    main()
In this example I am only trying to load 8 models, but nothing seems to get loaded.
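One detail worth noting while debugging: map_async returns an AsyncResult that the code above never inspects, so an exception raised inside a worker would pass unnoticed. Below is a minimal, self-contained sketch (using a hypothetical work() stand-in for load()) of how keeping the result and calling .get() on it surfaces such errors; it is only a diagnostic aid, not a fix for the loading problem itself.

import multiprocessing

def work(name):
    # Hypothetical stand-in for load(); raises on purpose to show how
    # worker-side errors can be surfaced in the parent process.
    raise RuntimeError('failed to load ' + name)

def main():
    multiprocessing.set_start_method('spawn', force=True)
    pool = multiprocessing.Pool(processes=2)
    result = pool.map_async(work, ['psy_01', 'psy_02'])
    pool.close()
    pool.join()
    # .get() re-raises any exception that occurred in a worker,
    # which is otherwise silently discarded when the AsyncResult is dropped.
    result.get()

if __name__ == '__main__':
    main()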