Visualizing CNN or pooling layers in tflearn

Asked: 2018-03-27 15:13:38

Tags: python visualization tflearn

Is there any way to visualize the output of a CNN or pooling layer in tflearn during training, or even during testing? I have seen visualization code for TensorFlow, but it involves sessions and feed dicts, and I keep getting errors like "unhashable numpy.ndarray" even though my image dimensions are consistent. So I decided to ask whether there is a way to visualize the output of any layer. Below is my tflearn layer code:

    from sklearn import cross_validation
    import tensorflow as tf
    import tflearn
    from tflearn.layers.core import input_data, fully_connected, dropout
    from tflearn.layers.conv import conv_2d, max_pool_2d
    from tflearn.layers.estimator import regression

    X_train, X_test, y_train, y_test = cross_validation.train_test_split(data, labels, test_size=0.1)

    tf.reset_default_graph()
    convnet = input_data(shape=[None, 50, 50, 3], name='input')
    convnet = conv_2d(convnet, 32, 5, activation='relu')
    convnet = max_pool_2d(convnet, 5)
    convnet = conv_2d(convnet, 64, 5, activation='relu')
    convnet = max_pool_2d(convnet, 5)

    convnet = conv_2d(convnet, 32, 5, activation='relu')
    convnet = max_pool_2d(convnet, 5)

    convnet = fully_connected(convnet, 128, activation='relu')
    convnet = dropout(convnet, 0.4)
    convnet = fully_connected(convnet, 6, activation='softmax')
    convnet = regression(convnet, optimizer='adam', learning_rate=0.005,
                         loss='categorical_crossentropy', name='MyClassifier')
    model = tflearn.DNN(convnet, tensorboard_dir='log', tensorboard_verbose=0)
    model.fit(X_train, y_train, n_epoch=20, validation_set=(X_test, y_test),
              snapshot_step=20, show_metric=True, run_id='MyClassifier')
    print("Saving the model")
    model.save('model.tflearn')

Is there any way to display the output of any layer during training or testing? By output I mean the convolved image, showing detected edges or other low-level features. Thanks.

1 Answer:

Answer 0 (score: 1)

As described here, you can view the outputs produced by intermediate layers by simply defining a new model that has the observed layer as its output.

First, declare your original model (but keep references to the intermediate layers you want to observe):

    convnet = input_data(shape=[None, 50, 50, 3], name='input')
    convnet = conv_2d(convnet, 32, 5, activation='relu')
    max_0 = max_pool_2d(convnet, 5)
    convnet = conv_2d(max_0, 64, 5, activation='relu')
    max_1 = max_pool_2d(convnet, 5)
    ...
    convnet = regression(...)
    model = tflearn.DNN(...)
    model.fit(...)

Now simply create a model over each of those layers and predict the input data:

    observed = [max_0, max_1, max_2]
    observers = [tflearn.DNN(v, session=model.session) for v in observed]
    outputs = [m.predict(X_test) for m in observers]
    print([d.shape for d in outputs])

Which, for your model, outputs the following shapes of the evaluated tensors:

    [(2, 10, 10, 32), (2, 2, 2, 64), (2, 1, 1, 32)]

With this, you will be able to view the outputs during testing. As for training, maybe you can use a callback?

    import matplotlib.pyplot as plt

    class PlottingCallback(tflearn.callbacks.Callback):
        def __init__(self, model, x,
                     layers_to_observe=(),
                     kernels=10,
                     inputs=1):
            self.model = model
            self.x = x
            self.kernels = kernels
            self.inputs = inputs
            # Share the training session so the observers predict with
            # the weights learned so far.
            self.observers = [tflearn.DNN(l, session=model.session)
                              for l in layers_to_observe]

        def on_epoch_end(self, training_state):
            outputs = [o.predict(self.x) for o in self.observers]

            for i in range(self.inputs):
                plt.figure(frameon=False)
                plt.subplots_adjust(wspace=0.1, hspace=0.1)
                ix = 1
                for o in outputs:
                    for kernel in range(self.kernels):
                        plt.subplot(len(outputs), self.kernels, ix)
                        plt.imshow(o[i, :, :, kernel])
                        plt.axis('off')
                        ix += 1
                plt.savefig('outputs-for-image:%i-at-epoch:%i.png'
                            % (i, training_state.epoch))

    model.fit(X_train, y_train,
              ...
              callbacks=[PlottingCallback(model, X_test, (max_0, max_1, max_2))])

This will save feature-map images like these to disk at each epoch, one grid per input image.
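To tie this back to the exact network in the question, here is a minimal end-to-end sketch. It assumes `data` and `labels` are already loaded as in the question; the names `max_0`, `max_1`, and `max_2` are illustrative references, and the third block (elided by "..." above) is reconstructed from the question's own layer code:

    from sklearn import cross_validation
    import tensorflow as tf
    import tflearn
    from tflearn.layers.core import input_data, fully_connected, dropout
    from tflearn.layers.conv import conv_2d, max_pool_2d
    from tflearn.layers.estimator import regression

    X_train, X_test, y_train, y_test = cross_validation.train_test_split(
        data, labels, test_size=0.1)

    tf.reset_default_graph()

    # The question's network, but keeping a reference to each pooling
    # layer so observer models can be built over them later.
    convnet = input_data(shape=[None, 50, 50, 3], name='input')
    convnet = conv_2d(convnet, 32, 5, activation='relu')
    max_0 = max_pool_2d(convnet, 5)
    convnet = conv_2d(max_0, 64, 5, activation='relu')
    max_1 = max_pool_2d(convnet, 5)
    convnet = conv_2d(max_1, 32, 5, activation='relu')
    max_2 = max_pool_2d(convnet, 5)

    convnet = fully_connected(max_2, 128, activation='relu')
    convnet = dropout(convnet, 0.4)
    convnet = fully_connected(convnet, 6, activation='softmax')
    convnet = regression(convnet, optimizer='adam', learning_rate=0.005,
                         loss='categorical_crossentropy', name='MyClassifier')

    model = tflearn.DNN(convnet, tensorboard_dir='log', tensorboard_verbose=0)
    model.fit(X_train, y_train, n_epoch=20, validation_set=(X_test, y_test),
              snapshot_step=20, show_metric=True, run_id='MyClassifier')

    # One observer per pooling layer; sharing the trained session means
    # predict() returns activations computed with the learned filters.
    observers = [tflearn.DNN(layer, session=model.session)
                 for layer in (max_0, max_1, max_2)]
    feature_maps = [obs.predict(X_test) for obs in observers]
    print([fm.shape for fm in feature_maps])

Passing `session=model.session` when building the observers is what makes this work: a plain `tflearn.DNN(layer)` would generally open a fresh session with re-initialized weights, so the feature maps would not reflect training.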