tf.Print() not printing the shape of tensors?

Asked: 2018-04-11 15:06:56

Tags: tensorflow machine-learning deep-learning classification python-3.5

I have written a simple classification program using TensorFlow and I get the output, except that I am trying to print the shapes of the tensors for the model parameters, features, and bias.

Function definitions:

import tensorflow as tf, numpy as np
from tensorflow.examples.tutorials.mnist import input_data


def get_weights(n_features, n_labels):
    # Return weights
    return tf.Variable(tf.truncated_normal((n_features, n_labels)))

def get_biases(n_labels):
    # Return biases
    return tf.Variable(tf.zeros(n_labels))

def linear(input, w, b):
    # Linear function (xW + b)
    return tf.add(tf.matmul(input, w), b)

def mnist_features_labels(n_labels):
    """Gets the first <n> labels from the MNIST dataset
    """
    mnist_features = []
    mnist_labels = []
    mnist = input_data.read_data_sets('dataset/mnist', one_hot=True)

    # In order to make quizzes run faster, we're only looking at 10000 images
    for mnist_feature, mnist_label in zip(*mnist.train.next_batch(10000)):

        # Add features and labels only if the label is among the first <n> classes
        if mnist_label[:n_labels].any():
            mnist_features.append(mnist_feature)
            mnist_labels.append(mnist_label[:n_labels])

    return mnist_features, mnist_labels

Graph creation:

# Number of features (28*28 image is 784 features)
n_features = 784
# Number of labels
n_labels = 3

# Features and Labels
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)

# Weights and Biases
w = get_weights(n_features, n_labels)
b = get_biases(n_labels)

# Linear Function xW + b
logits = linear(features, w, b)

# Training data
train_features, train_labels = mnist_features_labels(n_labels)

print("Total {0} data points of Training Data, each having {1} features \n \
      Total {2} number of labels,each having 1-hot encoding {3}".format(len(train_features),len(train_features[0]),\
                                                                     len(train_labels),train_labels[0]
                                                                      )
     )

# Global variables initializer
init = tf.global_variables_initializer()

with tf.Session() as session:

    session.run(init)

Here is where the problem lies:

    # shapes = tf.Print(tf.shape(features), [tf.shape(features),
    #                                         tf.shape(labels),
    #                                         tf.shape(w),
    #                                         tf.shape(b),
    #                                         tf.shape(logits)],
    #                   message="The shapes are:")
    # print("Verify shapes", shapes)
    logits = tf.Print(logits, [tf.shape(features),
                           tf.shape(labels),
                           tf.shape(w),
                           tf.shape(b),
                           tf.shape(logits)],
                  message= "The shapes are:")
    print(logits)

I looked here, but did not find anything useful.

    # Softmax
    prediction = tf.nn.softmax(logits)

    # Cross entropy
    # This quantifies how far off the predictions were.
    # You'll learn more about this in future lessons.
    cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)

    # Training loss
    # You'll learn more about this in future lessons.
    loss = tf.reduce_mean(cross_entropy)

    # Rate at which the weights are changed
    # You'll learn more about this in future lessons.
    learning_rate = 0.08

    # Gradient Descent
    # This is the method used to train the model
    # You'll learn more about this in future lessons.
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

    # Run optimizer and get loss
    _, l = session.run(
        [optimizer, loss],
        feed_dict={features: train_features, labels: train_labels})

# Print loss
print('Loss: {}'.format(l))

The output I get is:

Extracting dataset/mnist/train-images-idx3-ubyte.gz
Extracting dataset/mnist/train-labels-idx1-ubyte.gz
Extracting dataset/mnist/t10k-images-idx3-ubyte.gz
Extracting dataset/mnist/t10k-labels-idx1-ubyte.gz
Total 3118 data points of Training Data, each having 784 features 
       Total 3118 number of labels,each having 1-hot encoding [0. 1. 0.]
Tensor("Print_22:0", shape=(?, 3), dtype=float32)
Loss: 5.339271068572998

Can anyone help me understand why I am not able to see the shapes of the tensors?

2 answers:

Answer 0 (score: 6):

This is not how you use tf.Print. It is an op that by itself does nothing (it simply returns its first input), but as a side effect it prints the requested tensors. You should do something like

logits = tf.Print(logits, [tf.shape(features),
                           tf.shape(labels),
                           tf.shape(w),
                           tf.shape(b),
                           tf.shape(logits)],
                  message= "The shapes are:")

Now, whenever logits is evaluated (as it will be, since it is used to compute the loss/gradients), the shape information will be printed.

What you are doing right now is just printing the return value of the tf.Print op with Python's print, which is simply its input (tf.shape(features)).
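
For concreteness, a minimal sketch (not from the original answer) of the difference, reusing the names from the question. Note that tf.Print writes to standard error, so in a Jupyter notebook the message may show up in the terminal running the kernel rather than in the cell output.

print(logits)  # only shows the Tensor object's metadata, e.g. shape=(?, 3)

# Evaluating the tf.Print node (or anything downstream of it, such as the
# loss) is what triggers the side-effect printing to stderr. The feed must
# include labels here because tf.shape(labels) is one of the printed inputs.
session.run(logits, feed_dict={features: train_features,
                               labels: train_labels})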

Answer 1 (score: 0):

Following @xdurch0's suggestion, I tried this:

shapes = tf.Print(logits, [tf.shape(features),
                       tf.shape(labels),
                       tf.shape(w),
                       tf.shape(b),
                       tf.shape(logits)],
              message= "The shapes are:")
# Run optimizer and get loss
_, l, resultingShapes = session.run( [optimizer, loss, shapes],
                                     feed_dict={features: train_features, labels: train_labels})
print('The shapes are: ', resultingShapes.shape)

and it partially worked:

Extracting dataset/mnist/train-images-idx3-ubyte.gz
Extracting dataset/mnist/train-labels-idx1-ubyte.gz
Extracting dataset/mnist/t10k-images-idx3-ubyte.gz
Extracting dataset/mnist/t10k-labels-idx1-ubyte.gz
Total 3118 data points of Training Data, each having 784 features 
       Total 3118 number of labels, each having 1-hot encoding [0. 1. 0.]
The shapes are:  (3118, 3)

Loss: 10.223002433776855

Could @xdurch0 suggest something to get the desired results?

The results I am hoping for are:

tf.shape(features): (3118, 784), tf.shape(labels): (3118, 3),

tf.shape(w): (784, 3), tf.shape(b): (3, 1), tf.shape(logits): (3118, 3)
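
For reference, one way to get the individually labelled shapes listed above is to fetch the tf.shape ops directly and format them in Python. A minimal sketch (not part of the original thread), reusing the session, graph, and feed data from the question:

# Fetch each shape op by name; session.run accepts a dict of fetches and
# returns a dict mapping the same keys to the resolved values.
shape_ops = {'features': tf.shape(features),
             'labels': tf.shape(labels),
             'w': tf.shape(w),
             'b': tf.shape(b),
             'logits': tf.shape(logits)}
resolved = session.run(shape_ops,
                       feed_dict={features: train_features, labels: train_labels})
for name, shape in sorted(resolved.items()):
    print('tf.shape({}): {}'.format(name, tuple(shape)))

Note that b is created with tf.zeros(n_labels), so its actual shape is (3,) rather than the (3, 1) hoped for above.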