What do the TensorBoard graphs mean?

Time: 2018-11-13 17:37:56

Tags: python tensorflow

I followed some tutorials to try to visualize my network (word2vec) and collect training logs. Fortunately, I got it working and obtained some diagrams. I think the training went reasonably well, because the loss diagram looks like this: loss diagram. But I don't understand what the other diagrams mean. How should I interpret the diagrams below?

PS: the code is here.

import math
import tensorflow as tf

# batch_size, vocabulary_size, embedding_size, num_sampled and
# valid_examples are assumed to be defined earlier in the script.
graph = tf.Graph()
with graph.as_default():
    # Input data.
    train_inputs = tf.placeholder(tf.int32, shape=[batch_size], name='train_inputs')
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1], name='train_labels')
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

    # Ops and variables pinned to the CPU because of missing GPU implementation
    with tf.device('/cpu:0'):
        # Look up embeddings for inputs.
        with tf.name_scope('Embeddings'):
            embeddings = tf.Variable(
                tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0),
                name='Embeddings')
            embed = tf.nn.embedding_lookup(embeddings, train_inputs)
            tf.summary.histogram(name='Embeddings', values=embeddings)

        # Construct the variables for the NCE loss
        with tf.name_scope('Weights'):
            nce_weights = tf.Variable(
                tf.truncated_normal([vocabulary_size, embedding_size],
                                    stddev=1.0 / math.sqrt(embedding_size)),
                name='Weights')
            tf.summary.histogram(name='Weights', values=nce_weights)
        with tf.name_scope('Biases'):
            nce_biases = tf.Variable(tf.zeros([vocabulary_size]), name='Biases')
            tf.summary.histogram(name='Biases', values=nce_biases)

    # Compute the average NCE loss for the batch.
    # tf.nn.nce_loss automatically draws a new sample of the negative labels each
    # time we evaluate the loss.
    with tf.name_scope('Loss'):
        output_layer = tf.nn.nce_loss(
            weights=nce_weights,
            biases=nce_biases,
            labels=train_labels,
            inputs=embed,
            num_sampled=num_sampled,
            num_classes=vocabulary_size)
        loss = tf.reduce_mean(output_layer)
        tf.summary.scalar('loss', loss)
    # Construct the SGD optimizer using a learning rate of 1.0.
    with tf.name_scope('Train'):
        optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)
        merged = tf.summary.merge_all()
        #train_set = [optimizer, loss]
        train_set = [optimizer, merged]
    # Compute the cosine similarity between minibatch examples and all embeddings.
    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
    normalized_embeddings = embeddings / norm
    valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
    similarity = tf.nn.softmax(tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True))

    # Add variable initializer.
    init = tf.global_variables_initializer()
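
For reference, the summaries above only show up in TensorBoard if they are actually written to disk during training. Below is a minimal sketch of such a training loop; the log directory, the step count, and the generate_batch helper are assumptions for illustration, not part of the original post:

num_steps = 100001  # assumed; adjust to your run

with tf.Session(graph=graph) as session:
    # The FileWriter writes the merged summaries (the loss scalar plus the
    # Embeddings/Weights/Biases histograms) to a log directory.
    writer = tf.summary.FileWriter('./logs', session.graph)
    init.run()

    for step in range(num_steps):
        # generate_batch is a hypothetical helper that yields one batch of
        # (center word ids, context word ids) for the skip-gram model.
        batch_inputs, batch_labels = generate_batch(batch_size)
        feed_dict = {train_inputs: batch_inputs, train_labels: batch_labels}

        # train_set = [optimizer, merged], so this runs one SGD step and
        # evaluates the merged summary op in the same call.
        _, summary = session.run(train_set, feed_dict=feed_dict)
        writer.add_summary(summary, step)

    writer.close()

TensorBoard is then pointed at the same directory with `tensorboard --logdir ./logs`; the histogram tab shows how the Embeddings, Weights, and Biases distributions evolve over training steps, which are the diagrams the question is asking about.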

0 Answers:

No answers yet