I created a virtual notebook on Paperspace cloud infrastructure with a TensorFlow GPU P5000 virtual instance on the back end. When I start training my network, it runs 2x slower than on my MacBook Pro with a pure CPU runtime engine. How do I make sure my Keras NN uses the GPU instead of the CPU during training?
Please find my code below:
from tensorflow.contrib.keras.api.keras.models import Sequential
from tensorflow.contrib.keras.api.keras.layers import Dense
from tensorflow.contrib.keras.api.keras.layers import Dropout
from tensorflow.contrib.keras.api.keras import utils as np_utils
import numpy as np
import pandas as pd
# Read data
pddata = pd.read_csv('data/data.csv', delimiter=';')
# Helper function (prepare & test data)
def split_to_train_test(data):
    trainLength = len(data) - len(data)//10
    trainData = data.loc[:trainLength].sample(frac=1).reset_index(drop=True)
    testData = data.loc[trainLength+1:].sample(frac=1).reset_index(drop=True)
    trainLabels = trainData.loc[:, "Label"].as_matrix()
    testLabels = testData.loc[:, "Label"].as_matrix()
    trainData = trainData.loc[:, "Feature 0":].as_matrix()
    testData = testData.loc[:, "Feature 0":].as_matrix()
    return (trainData, testData, trainLabels, testLabels)
# prepare train & test data
(X_train, X_test, y_train, y_test) = split_to_train_test(pddata)
# Convert labels to one-hot notation
Y_train = np_utils.to_categorical(y_train, 3)
Y_test = np_utils.to_categorical(y_test, 3)
# Define model in Keras
def create_model(init):
    model = Sequential()
    model.add(Dense(101, input_shape=(101,), kernel_initializer=init, activation='tanh'))
    model.add(Dense(101, kernel_initializer=init, activation='tanh'))
    model.add(Dense(101, kernel_initializer=init, activation='tanh'))
    model.add(Dense(101, kernel_initializer=init, activation='tanh'))
    model.add(Dense(3, kernel_initializer=init, activation='softmax'))
    return model
# Train the model
uniform_model = create_model("glorot_normal")
uniform_model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
uniform_model.fit(X_train, Y_train, batch_size=1, epochs=300, verbose=1, validation_data=(X_test, Y_test))
Answer 0 (score: 4)
You need to run your network inside a TensorFlow session with log_device_placement = True set in its config (the last lines in the example code below). Interestingly, even though you set it on the session, it still applies when Keras does the fitting. So the code below (tested) does output the placement of each tensor. Note that since your data is not available, I have short-circuited the data reading and just run the network on random data. The code is self-contained and anyone can run it. One more note: if you run this from a Jupyter Notebook, the log_device_placement output goes to the terminal where the Jupyter Notebook was started, not to the notebook cell's output.
from tensorflow.contrib.keras.api.keras.models import Sequential
from tensorflow.contrib.keras.api.keras.layers import Dense
from tensorflow.contrib.keras.api.keras.layers import Dropout
from tensorflow.contrib.keras.api.keras import utils as np_utils
import numpy as np
import pandas as pd
import tensorflow as tf
# Read data
#pddata=pd.read_csv('data/data.csv', delimiter=';')
pddata = "foobar"
# Helper function (prepare & test data)
def split_to_train_test(data):
    return (
        np.random.uniform(size=(100, 101)),
        np.random.uniform(size=(100, 101)),
        np.random.randint(0, size=(100), high=3),
        np.random.randint(0, size=(100), high=3)
    )
    # Everything below is unreachable: the original data preparation is
    # short-circuited above with random data of the same shape.
    trainLength = len(data) - len(data)//10
    trainData = data.loc[:trainLength].sample(frac=1).reset_index(drop=True)
    testData = data.loc[trainLength+1:].sample(frac=1).reset_index(drop=True)
    trainLabels = trainData.loc[:, "Label"].as_matrix()
    testLabels = testData.loc[:, "Label"].as_matrix()
    trainData = trainData.loc[:, "Feature 0":].as_matrix()
    testData = testData.loc[:, "Feature 0":].as_matrix()
    return (trainData, testData, trainLabels, testLabels)
# prepare train & test data
(X_train, X_test, y_train, y_test) = split_to_train_test(pddata)
# Convert labels to one-hot notation
Y_train = np_utils.to_categorical(y_train, 3)
Y_test = np_utils.to_categorical(y_test, 3)
# Define model in Keras
def create_model(init):
    model = Sequential()
    model.add(Dense(101, input_shape=(101,), kernel_initializer=init, activation='tanh'))
    model.add(Dense(101, kernel_initializer=init, activation='tanh'))
    model.add(Dense(101, kernel_initializer=init, activation='tanh'))
    model.add(Dense(101, kernel_initializer=init, activation='tanh'))
    model.add(Dense(3, kernel_initializer=init, activation='softmax'))
    return model
# Train the model
uniform_model = create_model("glorot_normal")
uniform_model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
with tf.Session(config=tf.ConfigProto(log_device_placement=True)):
    uniform_model.fit(X_train, Y_train, batch_size=1, epochs=300, verbose=1, validation_data=(X_test, Y_test))
Terminal output (partial, as it is very long):
...
VarIsInitializedOp_13: (VarIsInitializedOp): /job:localhost/replica:0/task:0/device:GPU:0
2018-04-21 21:54:33.485870: I tensorflow/core/common_runtime/placer.cc:884]
VarIsInitializedOp_13: (VarIsInitializedOp)/job:localhost/replica:0/task:0/device:GPU:0
training/SGD/mul_18/ReadVariableOp: (ReadVariableOp): /job:localhost/replica:0/task:0/device:GPU:0
2018-04-21 21:54:33.485895: I tensorflow/core/common_runtime/placer.cc:884]
training/SGD/mul_18/ReadVariableOp: (ReadVariableOp)/job:localhost/replica:0/task:0/device:GPU:0
training/SGD/Variable_9/Read/ReadVariableOp: (ReadVariableOp): /job:localhost/replica:0/task:0/device:GPU:0
2018-04-21 21:54:33.485903: I tensorflow/core/common_runtime/placer.cc:884]
training/SGD/Variable_9/Read/ReadVariableOp: (ReadVariableOp)/job:localhost/replica:0/task:0/device:GPU:0
...
Note the GPU:0 at the end of many of the lines.
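If the log is long, the check can also be automated. Below is a small plain-Python sketch (no TensorFlow required) that counts how many placer lines ended up on each device; the regex assumes the `/device:GPU:0`-style suffixes shown above:

```python
import re

# Matches the trailing device spec in a placer log line, e.g.
# ".../job:localhost/replica:0/task:0/device:GPU:0"
DEVICE_RE = re.compile(r'/device:(GPU|CPU):\d+\s*$')

def count_placements(log_lines):
    """Count how many log lines report GPU vs CPU placement."""
    counts = {'GPU': 0, 'CPU': 0}
    for line in log_lines:
        match = DEVICE_RE.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

sample_log = [
    'VarIsInitializedOp_13: (VarIsInitializedOp)/job:localhost/replica:0/task:0/device:GPU:0',
    'some_const: (Const)/job:localhost/replica:0/task:0/device:CPU:0',
]
print(count_placements(sample_log))  # → {'GPU': 1, 'CPU': 1}
```

If the GPU count is zero (or the CPU count dominates for compute-heavy ops), the model is not actually running on the GPU.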
The relevant page of the TensorFlow manual: Using GPU: Logging Device Placement.
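For what it's worth, in TensorFlow 2.x the Session API used above is gone; the equivalent switch (assuming TF >= 2.0 is installed) is a single call made before any ops are created. A guarded sketch that degrades gracefully when TF is absent:

```python
def enable_placement_logging():
    """Turn on per-op device placement logging (TF 2.x style).

    Returns True on success, False when TensorFlow is missing or too old.
    """
    try:
        import tensorflow as tf
        # TF 2.x replacement for log_device_placement=True in a Session config;
        # call this before any tensors/ops are created.
        tf.debugging.set_log_device_placement(True)
        return True
    except (ImportError, AttributeError):
        return False

print(enable_placement_logging())
```

After this call, building and fitting the Keras model proceeds as usual, and each op's placement is logged as it runs.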
Answer 1 (score: 1)
Place the following near the top of your Jupyter notebook. Comment out what you don't need.
# confirm TensorFlow sees the GPU
from tensorflow.python.client import device_lib
assert 'GPU' in str(device_lib.list_local_devices())
# confirm Keras sees the GPU (for TensorFlow 1.X + Keras)
from keras import backend
assert len(backend.tensorflow_backend._get_available_gpus()) > 0
# confirm PyTorch sees the GPU
from torch import cuda
assert cuda.is_available()
assert cuda.device_count() > 0
print(cuda.get_device_name(cuda.current_device()))
NOTE: With the release of TensorFlow 2.0, Keras is now included as part of the TF API.
Originally answered here.
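Note that `backend.tensorflow_backend._get_available_gpus()` used above is a private helper that no longer exists in TensorFlow 2.x. A rough TF 2.x equivalent (an assumption-laden sketch: requires TF >= 2.1 for `tf.config.list_physical_devices`, and returns False rather than crashing when TF is absent) might look like:

```python
def gpu_available():
    """True if TensorFlow reports at least one physical GPU, else False
    (also False when TensorFlow is not installed)."""
    try:
        import tensorflow as tf
        return len(tf.config.list_physical_devices('GPU')) > 0
    except ImportError:
        return False

print(gpu_available())
```

This returns quickly, unlike the `tf.test.is_gpu_available` call mentioned in the next answer.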
Answer 2 (score: 0)
Considering that Keras has been built into TensorFlow since version 2.0:
import tensorflow as tf
tf.test.is_built_with_cuda()
tf.test.is_gpu_available(cuda_only=True)
NOTE: the latter method may take several minutes to run.
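Another lightweight check in TF 2.x (assumptions: eager execution enabled, TF >= 2.0) is to inspect the `.device` attribute of a freshly created tensor; on a working GPU setup the string should end in `device:GPU:0`. A guarded sketch:

```python
def current_default_device():
    """Return the device string of a small eager tensor, or None when
    TensorFlow is not installed."""
    try:
        import tensorflow as tf
        x = tf.random.uniform((4, 4))  # placed on GPU automatically if one is usable
        return x.device  # e.g. '/job:localhost/replica:0/task:0/device:GPU:0'
    except ImportError:
        return None

print(current_default_device())
```

If this prints a `device:CPU:0` path on your Paperspace instance, TensorFlow is not seeing the GPU at all (typically a CUDA/driver or tensorflow-gpu installation issue), which would explain the slowdown in the original question.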