Efficient example implementation of GPU training of a simple feed-forward NN in TensorFlow? Maybe with tf.data?

Time: 2019-01-18 12:47:45

Tags: python python-3.x tensorflow

I've just started using the GPU version of TensorFlow, hoping it would speed up the training of my feed-forward neural networks. I am able to train on my GPU (GTX 1080 Ti), but unfortunately it is not noticeably faster than training the same network on my CPU (i7-8700K) with my current implementation. During training the GPU is barely utilized, which makes me suspect the bottleneck of my implementation is how the data is copied from host to device using feed_dict.

I've heard that TensorFlow has a pipeline called "tf.data" that is supposed to make feeding data to GPUs and the like easier and faster. However, I haven't been able to find any simple example where this concept is implemented for multilayer-perceptron training as a replacement for feed_dict.

Is anyone aware of such an example? Preferably as simple as possible, since I'm new to TensorFlow. Or is there something else I should change in my current implementation to make it more efficient? I'm pasting my code here:

import tensorflow as tf
import numpy as np
from sklearn import datasets
from sklearn.model_selection import train_test_split
import time

tf.reset_default_graph()

# Function for iris dataset.
def get_iris_data():
    iris   = datasets.load_iris()
    data   = iris["data"]
    target = iris["target"]

    # Convert to one-hot vectors
    num_labels = len(np.unique(target))
    all_Y = np.eye(num_labels)[target]
    return train_test_split(data, all_Y, test_size=0.33, random_state=89)
# Function which initializes tensorflow weights & biases for feed-forward NN.
def InitWeights(LayerSizes):
    with tf.device('/gpu:0'):
        # Make tf placeholders for network inputs and outputs.
        X = tf.placeholder( shape = (None,LayerSizes[0]),
                            dtype = tf.float32,
                            name ='InputData')
        y = tf.placeholder( shape = (None,LayerSizes[-1]),
                            dtype = tf.float32,
                            name ='OutputData')
        # Initialize weights and biases.
        W = {}; b = {};
        for ii in range(len(LayerSizes)-1):
            layername = f'layer{ii}'
            with tf.variable_scope(layername):
                ny = LayerSizes[ii]
                nx = LayerSizes[ii+1]
                # Weights (initialized with Xavier initialization).
                W['Weights_'+layername] = tf.get_variable(
                                    name = 'Weights_'+layername,
                                    shape = (ny, nx),
                                    initializer = tf.contrib.layers.xavier_initializer(),
                                    dtype = tf.float32
                                    )
                # Bias (initialized with Xavier initialization).
                b['Bias_'+layername] = tf.get_variable(
                                    name = 'Bias_'+layername,
                                    shape = (nx,),
                                    initializer = tf.contrib.layers.xavier_initializer(),
                                    dtype = tf.float32
                                    )
    return W, b, X, y
# Function for forward propagation of NN.
def FeedForward(X, W, b):    
    with tf.device('/gpu:0'):
        # Initialize 'a' of first layer to the placeholder of the network input.
        a = X
        # Loop all layers of the network.
        for ii in range(len(W)):
            # Use name of each layer as index.
            layername = f'layer{ii}'
            ## Weighted sum: z = input*W + b
            z = tf.add(tf.matmul(a, W['Weights_'+layername], name = 'WeightedSum_z_'+layername), b['Bias_'+layername])
            ## Passed through activation fcn: a = h(z)
            if ii == len(W)-1:
                a = z
            else:
                a = tf.nn.relu(z, name = 'activation_a_'+layername)
    return a

if __name__ == "__main__":
    # Import data
    train_X, test_X, train_y, test_y = get_iris_data()
    # Define network size [ninputs-by-256-by-outputs]
    LayerSizes = [4, 256, 3]
    # Initialize weights and biases.
    W, b, X, y  = InitWeights(LayerSizes)

    # Define loss function to optimize.
    yhat = FeedForward(X, W, b)
    loss = tf.reduce_sum(tf.square(y - yhat), axis=0)

    # Define optimizer to use when minimizing loss function.
    all_variables = tf.trainable_variables()
    optimizer     = tf.train.GradientDescentOptimizer(learning_rate = 0.0001)
    train_op      = optimizer.minimize(loss, var_list = all_variables)

    # Start tf session and initialize variables.
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())

    # Train 10000 minibatches and time how long it takes.   
    t0 = time.time()
    for i in range(10000):
        ObservationsToUse = np.random.choice(len(train_X), 32)
        X_minibatch = train_X[ObservationsToUse,:]
        y_minibatch = train_y[ObservationsToUse,:]
        sess.run(train_op, feed_dict={X : X_minibatch, y : y_minibatch})
    t1 = time.time()

    print('Training took %0.2f seconds' %(t1-t0)) 
    sess.close()

1 Answer:

Answer 0: (score: 0)

The speed is likely low because:

  • You are creating placeholders. On every sess.run call, the NumPy data has to be inserted into the placeholders and converted into tensors of the graph, which adds host-to-device copy overhead on each step.

By using tf.data.Dataset, you can create a direct pipeline so that data flows straight into the graph without placeholders. Datasets are fast, scalable, and provide a number of useful functions.

with np.load("/var/data/training_data.npy") as data:
    features = data["features"]
    labels = data["labels"]

# Assume that each row of `features` corresponds to the same row as `labels`.
assert features.shape[0] == labels.shape[0]
dataset = tf.data.Dataset.from_tensor_slices((features, labels))

Some useful functions:

dataset = dataset.shuffle(buffer_size=10000)
dataset = dataset.batch(32) # Creating batches
dataset = dataset.repeat(num_epochs) # repeat the dataset 'N' times
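# Optional addition (not in the original answer): prefetching lets the input
# pipeline prepare the next batch while the current training step runs.
dataset = dataset.prefetch(1)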
iterator = dataset.make_one_shot_iterator() # Create an iterator to retrieve batches of data

X, Y = iterator.get_next()

Here, the batch size is 32. In your case,

dataset = tf.data.Dataset.from_tensor_slices((data, targets))

So placeholders are not needed. Simply run,

session.run( train_op ) # no feed_dict!!
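
To tie this back to the question's script, here is a minimal sketch (untested, TF 1.x) of how the training loop above could be rewired to tf.data. It assumes train_X, train_y, W, b, and FeedForward are defined as in the question (the unused placeholders X, y returned by InitWeights are simply ignored); the shuffle buffer and prefetch settings are illustrative choices, not the only correct ones.

# Build an input pipeline from the in-memory iris arrays, replacing feed_dict.
# Cast to float32 so the data matches the dtype of the network's weights.
dataset = tf.data.Dataset.from_tensor_slices(
    (train_X.astype(np.float32), train_y.astype(np.float32)))
dataset = dataset.shuffle(buffer_size=len(train_X))  # reshuffle the (small) dataset
dataset = dataset.repeat()                           # loop over the data indefinitely
dataset = dataset.batch(32)                          # same minibatch size as the question
dataset = dataset.prefetch(1)                        # prepare the next batch in parallel
iterator = dataset.make_one_shot_iterator()
X_batch, y_batch = iterator.get_next()

# Build loss and train_op directly on the iterator's output tensors.
yhat = FeedForward(X_batch, W, b)
loss = tf.reduce_sum(tf.square(y_batch - yhat), axis=0)
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.0001).minimize(loss)

sess = tf.Session()
sess.run(tf.global_variables_initializer())
for i in range(10000):
    sess.run(train_op)  # each run pulls the next minibatch from the pipeline

One caveat on the design: iris is a tiny dataset (around a hundred training rows), so per-step overhead dominates, and even with an efficient pipeline a GPU may not beat a CPU on a workload this small.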