How to visualize embeddings in TensorBoard? (Not MNIST data)

Date: 2017-02-28 15:26:36

Tags: tensorflow tensorboard

I am trying to create an embedding visualization in TensorBoard. I am using my own CSV data rather than the MNIST data; the rows in the CSV look like this:

    0.266782506,"1,0"
    0.361942522,"0,1"
    0.862076491,"0,1" 

The first column, e.g. 0.266782506, is the input sample x, and the quoted "1,0" / "0,1" is the one-hot label y.
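For reference, rows in that format can be parsed into a float feature and a one-hot label list like this (a minimal sketch; `sample_csv` stands in for the real file):

```python
import csv
import io

# sample_csv mimics the file contents shown above
sample_csv = io.StringIO('0.266782506,"1,0"\n0.361942522,"0,1"\n')

xs, ys = [], []
for value, label in csv.reader(sample_csv):
    xs.append(float(value))                        # feature column
    ys.append([int(v) for v in label.split(',')])  # "1,0" -> [1, 0]

print(xs)  # [0.266782506, 0.361942522]
print(ys)  # [[1, 0], [0, 1]]
```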

I have looked at the "Embedding Projector" section of the TensorBoard documentation for how to create the visualization, but I only found examples that use the MNIST data, so I am asking for guidance on how to create an embedding visualization in TensorBoard from data like mine. I can already visualize SCALARS, GRAPHS and HISTOGRAMS in TensorBoard with my code, which is shown below:

    # coding=utf-8
    import tensorflow as tf
    import numpy
    import os
    import csv
    import shutil
    from tensorflow.contrib.tensorboard.plugins import projector

    # Read the features and one-hot labels from the CSV
    filename = open(r'D:\Program Files (x86)\logistic\sample_1.csv', 'r')
    reader = csv.reader(filename)
    t_X, t_Y = [], []
    for row in reader:
        t_X.append(float(row[0]))                        # feature column
        t_Y.append([int(v) for v in row[1].split(',')])  # "0,1" -> [0, 1]
    t_X = numpy.asarray(t_X, dtype=numpy.float32)
    t_Y = numpy.asarray(t_Y, dtype=numpy.float32)
    t_XT = numpy.transpose([t_X])
    filename.close()

    # Parameters
    learning_rate = 0.01
    training_epochs = 5
    batch_size = 50
    display_step = 1
    n_samples = t_X.shape[0]

    sess = tf.InteractiveSession()

    with tf.name_scope('Input'):
        with tf.name_scope('x_input'):
            x = tf.placeholder(tf.float32, [None, 1],name='x_input')
        with tf.name_scope('y_input'):
            y = tf.placeholder(tf.float32, [None, 2],name='y_input')

    # Weight
    with tf.name_scope('layer1'):
        with tf.name_scope('weight'):
            W = tf.Variable(tf.random_normal([1, 2],dtype=tf.float32),name='weight')
        with tf.name_scope('bias'):
            b = tf.Variable(tf.random_normal([2], dtype=tf.float32),name='bias')

    # model
    with tf.name_scope('Model'):
        with tf.name_scope('pred'):
            pred = tf.nn.softmax(tf.matmul(x, W) + b, name='pred')
        with tf.name_scope('cost'):
            cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred), reduction_indices=1),name='cost')
            tf.summary.scalar('cost',cost)
            tf.summary.histogram('cost',cost)
        with tf.name_scope('optimizer'):
            optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)


    # Calculate accuracy
    with tf.name_scope('accuracy_count'):
        with tf.name_scope('correct_prediction'):
            correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
            with tf.name_scope('accuracy'):
                accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
                tf.summary.scalar('accuracy',accuracy)
                tf.summary.histogram('accuracy', accuracy)

    init = tf.global_variables_initializer()
    merged = tf.summary.merge_all()
    sess.run(init)

    writer = tf.summary.FileWriter(r'D:\Tensorlogs\logs', sess.graph)


    for epoch in range(training_epochs):
        avg_cost = 0
        total_batch = int(n_samples / batch_size)

        i = 0
        for anc in range(total_batch):
            m,n = [],[]
            m = t_X[i:i+batch_size]
            n = t_Y[i:i+batch_size]
            m = numpy.asarray(m)
            n = numpy.asarray(n)
            m = numpy.transpose([m])
            summary, predr, o, c = sess.run([merged, pred, optimizer, cost],feed_dict={x: m, y: n})
            avg_cost += c / total_batch
            i = i + batch_size
        writer.add_summary(summary, epoch+1)

        if (epoch + 1) % display_step == 0:
            print("Epoch:", '%04d' % (epoch + 1), "cost=", avg_cost,
                  "W=", sess.run(W), "b=", sess.run(b),
                  "accuracy=", accuracy.eval(feed_dict={x: t_XT, y: t_Y}))

    print("Optimization Finished!")
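As far as I understand, the projector tab reads a `projector_config.pbtxt` from the log directory, which points a checkpointed 2-D variable (shaped `[num_samples, dim]`) at an optional metadata TSV with one label per row. With the TF 1.x API this config is normally written for you by `projector.visualize_embeddings` from `tensorflow.contrib.tensorboard.plugins` after saving a checkpoint of the embedding variable. A minimal sketch of the two files themselves (paths and the tensor name are my assumptions, not from any example):

```python
import os

log_dir = 'logs'
os.makedirs(log_dir, exist_ok=True)

labels = [[1, 0], [0, 1], [0, 1]]  # one-hot labels from the CSV

# One metadata row per embedding row; the projector can color points by it.
with open(os.path.join(log_dir, 'metadata.tsv'), 'w') as f:
    for onehot in labels:
        f.write('%d\n' % onehot.index(1))  # class index: 0 or 1

# Roughly what projector.visualize_embeddings would emit:
config_text = (
    'embeddings {\n'
    '  tensor_name: "sample_embedding"\n'
    '  metadata_path: "metadata.tsv"\n'
    '}\n'
)
with open(os.path.join(log_dir, 'projector_config.pbtxt'), 'w') as f:
    f.write(config_text)
```

But I am not sure how to connect this to a model like mine, where the "embedding" would presumably be the per-sample activations rather than a word-embedding matrix.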

Many thanks!

0 Answers:

No answers yet