My Keras version is 2.0.9, with the tensorflow backend.
I tried to use multi_gpu_model in keras. In practice, however, training on 4 GPUs was even slower than on 1 GPU: I got 25 seconds for 1 GPU and 50 seconds for 4 GPUs. Can you tell me why this happens?
I followed the blog post for multi_gpu_model.
For 1 GPU I used this command:
CUDA_VISIBLE_DEVICES=0 python gpu_test.py
and for 4 GPUs:
python gpu_test.py
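If you want to pin the 4-GPU run to specific devices as well, something like the following should also work (assuming the four GPUs are numbered 0-3):
CUDA_VISIBLE_DEVICES=0,1,2,3 python gpu_test.py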
Here is the source code for training.
from keras.datasets import mnist
from keras.layers import Input, Dense, merge
from keras.layers.core import Lambda
from keras.models import Model
from keras.utils import to_categorical
from keras.utils.training_utils import multi_gpu_model
import time
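# Load MNIST, flatten each 28x28 image into a 784-dim vector, one-hot encode the labels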
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
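# A simple fully-connected classifier: 784 -> 4096 -> 2048 -> 512 -> 64 -> 10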
inputs = Input(shape=(784,))
x = Dense(4096, activation='relu')(inputs)
x = Dense(2048, activation='relu')(x)
x = Dense(512, activation='relu')(x)
x = Dense(64, activation='relu')(x)
predictions = Dense(10, activation='softmax')(x)
model = Model(inputs=inputs, outputs=predictions)
'''
m_model = multi_gpu_model(model, 4)
m_model.compile(optimizer='rmsprop',
                loss='categorical_crossentropy',
                metrics=['accuracy'])
m_model.summary()
a=time.time()
m_model.fit(x_train, y_train, batch_size=128, epochs=5)
print(time.time() - a)
a=time.time()
m_model.predict(x=x_test, batch_size=128)
print(time.time() - a)
'''
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
a=time.time()
model.fit(x_train, y_train, batch_size=128, epochs=5)
print(time.time() - a)
a=time.time()
model.predict(x=x_test, batch_size=128)
print(time.time() - a)
Answer 0 (score: 1)
I can tell you what I think the answer is, though I haven't gotten it fully working myself. I was tipped off to this by a bug report, and in the source code for multi_gpu_model it says:
# Instantiate the base model (or "template" model).
# We recommend doing this with under a CPU device scope,
# so that the model's weights are hosted on CPU memory.
# Otherwise they may end up hosted on a GPU, which would
# complicate weight sharing.
with tf.device('/cpu:0'):
    model = Xception(weights=None,
                     input_shape=(height, width, 3),
                     classes=num_classes)
I think this is the problem. I'm still trying to get it working myself, though.
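Applied to the model in the question, a minimal sketch of that advice would look something like this (variable names mirror the question's script; I haven't verified that this alone recovers the expected speed-up):
import tensorflow as tf
from keras.layers import Input, Dense
from keras.models import Model
from keras.utils.training_utils import multi_gpu_model

# Build the "template" model under a CPU device scope so its weights live in
# host memory and can be shared cleanly by the per-GPU replicas.
with tf.device('/cpu:0'):
    inputs = Input(shape=(784,))
    x = Dense(4096, activation='relu')(inputs)
    x = Dense(2048, activation='relu')(x)
    x = Dense(512, activation='relu')(x)
    x = Dense(64, activation='relu')(x)
    predictions = Dense(10, activation='softmax')(x)
    model = Model(inputs=inputs, outputs=predictions)

# Replicate the template model on 4 GPUs; each batch is split across them.
m_model = multi_gpu_model(model, gpus=4)
m_model.compile(optimizer='rmsprop',
                loss='categorical_crossentropy',
                metrics=['accuracy'])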