I'm taking CS 20SI: TensorFlow for Deep Learning Research from Stanford. I have a question about the following code:
import time
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# Step 1: Read in data
# using TF Learn's built-in function to load MNIST data to the folder data/mnist
MNIST = input_data.read_data_sets("/data/mnist", one_hot=True)
# Batched logistic regression
learning_rate = 0.01
batch_size = 128
n_epochs = 25
X = tf.placeholder(tf.float32, [batch_size, 784], name='image')
Y = tf.placeholder(tf.float32, [batch_size, 10], name='label')
#w = tf.Variable(tf.random_normal(shape = [int(shape[1]), int(Y.shape[1])], stddev = 0.01), name='weights')
#b = tf.Variable(tf.zeros(shape = [1, int(Y.shape[1])]), name='bias')
w = tf.Variable(tf.random_normal(shape=[784, 10], stddev=0.01), name="weights")
b = tf.Variable(tf.zeros([1, 10]), name="bias")
logits = tf.matmul(X, w) + b
entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y)
loss = tf.reduce_mean(entropy)  # computes the mean over examples in the batch
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(loss)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    n_batches = int(MNIST.train.num_examples / batch_size)
    for i in range(n_epochs):
        start_time = time.time()
        for _ in range(n_batches):
            X_batch, Y_batch = MNIST.train.next_batch(batch_size)
            opt, loss_ = sess.run([optimizer, loss], feed_dict={X: X_batch, Y: Y_batch})
        end_time = time.time()
        print('Epoch %d took %f' % (i, end_time - start_time))
This code runs logistic regression on the MNIST dataset. The author says:
"Running on my Mac, the batched version of the model with batch size 128 runs in 0.5 seconds."
However, when I run it, each epoch takes about 2 seconds, for a total execution time of about a minute. Is it reasonable for this example to take that long? I currently have a Ryzen 1700 (3.0 GHz, no overclock) and a GTX 1080 GPU (no overclock).
Answer 0 (score: 1):
I tried this code on a GTX Titan X (Maxwell), and each epoch takes about 0.5 seconds. I would expect a GTX 1080 to get similar results.
Try the latest TensorFlow and CUDA/cuDNN versions. Make sure no environment variables are set that impose limits (which GPUs are visible, how much memory TensorFlow can use, etc.). You can try running a micro-benchmark to confirm you can reach your card's advertised FLOPS, e.g. Testing GPU with tensorflow matrix multiplication
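As a rough illustration of such a micro-benchmark, here is a minimal sketch in the same TF 1.x style as the code above: it prints a couple of environment variables that can hide GPUs from TensorFlow, then times repeated matrix multiplications to estimate achieved FLOPS. The matrix size n, the iteration count, and the specific variables inspected are illustrative choices, not taken from the linked post.

import os
import time
import tensorflow as tf

# Environment variables that can hide GPUs or throttle TensorFlow.
for var in ("CUDA_VISIBLE_DEVICES", "TF_MIN_GPU_MULTIPROCESSOR_COUNT"):
    print(var, "=", os.environ.get(var, "<not set>"))

n = 8192  # one n x n matmul costs about 2 * n^3 FLOPs
with tf.device("/gpu:0"):
    a = tf.Variable(tf.random_normal([n, n]))
    b = tf.Variable(tf.random_normal([n, n]))
    # reduce_sum keeps the fetched value tiny, so device-to-host
    # transfer time does not dominate the measurement
    prod = tf.reduce_sum(tf.matmul(a, b))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(prod)  # warm-up run (allocations, kernel selection)
    iters = 10
    start = time.time()
    for _ in range(iters):
        sess.run(prod)
    elapsed = time.time() - start
    print("~%.0f GFLOPS achieved" % (2.0 * n ** 3 * iters / elapsed / 1e9))

If the number reported here is far below the GTX 1080's advertised ~8.9 TFLOPS (single precision), the bottleneck is likely the setup (driver, CUDA/cuDNN, or TensorFlow build) rather than the logistic regression script itself.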