How do you visualize or plot training/test data in real time in machine learning?

Asked: 2018-01-25 09:50:36

Tags: python matplotlib machine-learning visualization data-visualization

In short, I want to be able to visualize the train/test data in real time, while the model is learning.

Here is how I currently visualize progress:

batch_size = 100
epochs = 30
init = tf.global_variables_initializer()
samples = []

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(epochs):
        num_batches = mnist.train.num_examples // batch_size

        for i in range(num_batches):
            batch = mnist.train.next_batch(batch_size)
            batch_images = batch[0].reshape((batch_size, 784))
            batch_images = batch_images * 2 - 1  # rescale pixels to [-1, 1]
            batch_z = np.random.uniform(-1, 1, size=(batch_size, 100))

            # one discriminator update, then one generator update
            _ = sess.run(D_trainer, feed_dict={real_images: batch_images,
                                               z: batch_z})
            _ = sess.run(G_trainer, feed_dict={z: batch_z})
        print("ON EPOCH {}".format(epoch))

        # keep one generated sample per epoch for later inspection
        sample_z = np.random.uniform(-1, 1, size=(1, 100))
        gen_samples = sess.run(generator(z, reuse=True),
                               feed_dict={z: sample_z})
        samples.append(gen_samples)

new_samples = []
#saver = tf.train.Saver(var_list=g_vars)

with tf.Session() as sess:
    #saver.restore(sess,"...")

    # draw a few samples from the generator
    for x in range(5):
        sample_z = np.random.uniform(-1, 1, size=(1, 100))
        gen_samples = sess.run(generator(z, reuse=True),
                               feed_dict={z: sample_z})
        new_samples.append(gen_samples)
plt.imshow(new_samples[0].reshape(28,28))
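One way to extend this so the samples refresh while training is still running is matplotlib's interactive mode. The helper below is a sketch of my own (show_progress is a hypothetical name, not part of the code above); it would be called once per epoch, right after samples.append(gen_samples):

import matplotlib.pyplot as plt

plt.ion()                        # interactive mode: figures update without blocking
fig, ax = plt.subplots()

def show_progress(sample, epoch):
    # redraw the latest generated digit without pausing the session
    ax.clear()
    ax.imshow(sample.reshape(28, 28), cmap="gray")
    ax.set_title("Generated sample, epoch {}".format(epoch))
    plt.pause(0.001)             # give the GUI event loop time to render

Calling show_progress(gen_samples, epoch) inside the epoch loop refreshes the same window after every epoch.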
     

And this is how I plot a live graph for real-time sentiment analysis, by running the script below in another terminal.

import matplotlib.pyplot as plt
import matplotlib.animation as animation
from matplotlib import style
import time

style.use("ggplot")

fig = plt.figure()
ax1 = fig.add_subplot(1,1,1)

def animate(i):
    # re-read the sentiment log written by the classifier running in the other terminal
    pullData = open("twitter-out.txt","r").read()
    lines = pullData.split('\n')

    xar = []
    yar = []

    x = 0
    y = 0

    # running total over the last 200 lines: +1 for each "pos", -1 for each "neg"
    for l in lines[-200:]:
        x += 1
        if "pos" in l:
            y += 1
        elif "neg" in l:
            y -= 1

        xar.append(x)
        yar.append(y)

    ax1.clear()
    ax1.plot(xar,yar)

ani = animation.FuncAnimation(fig, animate, interval=1000)
plt.show()
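The same file-plus-FuncAnimation pattern could be pointed at training metrics instead of sentiment labels: have the training loop append one line per batch to a text file, and let a script like the one above tail and plot it from another terminal. A minimal sketch, assuming discriminator/generator loss tensors named d_loss and g_loss exist in the graph (they are not shown in the GAN snippet) and a hypothetical log file losses.txt:

# inside the batch loop, next to the existing sess.run calls:
d_val, g_val = sess.run([d_loss, g_loss],        # d_loss / g_loss are assumed names
                        feed_dict={real_images: batch_images, z: batch_z})
with open("losses.txt", "a") as f:               # hypothetical log file
    f.write("{},{}\n".format(d_val, g_val))

The animate() function would then split each line on ',' and plot the two columns instead of counting "pos"/"neg" labels.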

I have also attached a YouTube link to further clarify the problem I am running into. I want to be able to see and hear the images/audio while the model is training.

Starts at 1:10:23 - 1:11:03

Generating Real-time RNN-LSTM

1 Answer:

Answer 0 (score: 0):

If you are interested in training curves, take a look at this post: Keras + TensorFlow Realtime training chart (in which I recommend my package livelossplot).
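As a rough illustration of how that package is usually wired in (a sketch of my own: the PlotLossesKeras callback name and the toy model below are assumptions, so check the livelossplot documentation for your version):

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Flatten
from livelossplot import PlotLossesKeras   # assumed callback name; verify against your livelossplot version

(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation="relu"),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# the callback redraws the loss/accuracy charts in the notebook after every epoch
model.fit(x_train, y_train,
          validation_data=(x_test, y_test),
          epochs=5,
          callbacks=[PlotLossesKeras()])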

Following my tutorial Starting deep learning hands-on: image classification on CIFAR-10, I firmly insist on keeping track of both:

  • global metrics (log-loss, accuracy),
  • examples (correctly and incorrectly classified cases).

The latter gives some insight into which patterns are problematic, and on many occasions it has helped me change the network (or supplement the training data, when that was the issue).

An example of how it works (here with Neptune, but you can do it manually in Jupyter Notebook, or use the TensorBoard image channel):

Misclassified images by neural network - Neptune

And then looking at concrete examples, along with the predicted probabilities:

[image: example predictions with class probabilities]
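For the "manually in Jupyter Notebook" route, a sketch along these lines would display misclassified cases together with their predicted probabilities (model, x_test, and y_test are assumed to be a trained Keras-style classifier and MNIST-shaped test arrays; none of these names come from the answer):

import numpy as np
import matplotlib.pyplot as plt

probs = model.predict(x_test)                  # per-class probabilities
preds = probs.argmax(axis=1)
wrong = np.where(preds != y_test)[0]           # indices of misclassified examples

fig, axes = plt.subplots(1, 5, figsize=(12, 3))
for ax, idx in zip(axes, wrong[:5]):
    ax.imshow(x_test[idx].reshape(28, 28), cmap="gray")
    ax.set_title("true {}, pred {} ({:.0%})".format(
        y_test[idx], preds[idx], probs[idx].max()))
    ax.axis("off")
plt.show()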

Full disclosure: I work with deepsense.ai, the creators of Neptune - Machine Learning Lab.