How do I save the activation losses that are printed to the screen when using visualize_activation in keras?

Time: 2019-06-04 12:50:11

Tags: python keras callback

I am working on visualizing the activations of a dense layer in a convolutional neural network, using the visualize_activation function from the activation_maximization.py file. Here is the source code:

here

When I call the function, it returns an image that maximally activates a specific class at the output of the dense layer. It prints out the loss values at each iteration. I would like to save these losses along with the iteration number so that I can plot the loss history.

from vis.utils import utils #utility to find layers
from keras import activations #import activations package from keras 

layer_idx = utils.find_layer_idx(model, 'dense_4') #We want to visualize the final dense layer. 

model.layers[layer_idx].activation = activations.linear  #change activation of final layer from sigmoid to linear

model = utils.apply_modifications(model) #apply that modification to the model

from vis.visualization import visualize_activation

img = visualize_activation(model, layer_idx, filter_indices=0, max_iter=20, callbacks=[],verbose=True)


Iteration: 1, named_losses: [('ActivationMax Loss', 905.0986),
 ('L-6.0 Norm Loss', 0.5103914),
 ('TV(2.0) Loss', 6269.6377)], overall loss: 7175.24658203125
Iteration: 2, named_losses: [('ActivationMax Loss', 587.5296),
 ('L-6.0 Norm Loss', 0.505722),
 ('TV(2.0) Loss', 3216.1995)], overall loss: 3804.23486328125
Iteration: 3, named_losses: [('ActivationMax Loss', 490.57422),
 ('L-6.0 Norm Loss', 0.5032846),
 ('TV(2.0) Loss', 1831.6558)], overall loss: 2322.7333984375
Iteration: 4, named_losses: [('ActivationMax Loss', 373.54468),
 ('L-6.0 Norm Loss', 0.50211877),
 ('TV(2.0) Loss', 1174.0686)], overall loss: 1548.1153564453125
Iteration: 5, named_losses: [('ActivationMax Loss', 270.78586),
 ('L-6.0 Norm Loss', 0.5014095),
 ('TV(2.0) Loss', 776.7929)], overall loss: 1048.0802001953125
Iteration: 6, named_losses: [('ActivationMax Loss', 194.22995),
 ('L-6.0 Norm Loss', 0.5009613),
 ('TV(2.0) Loss', 523.78174)], overall loss: 718.5126342773438
Iteration: 7, named_losses: [('ActivationMax Loss', 142.9722),
 ('L-6.0 Norm Loss', 0.5007244),
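One possible approach: keras-vis invokes each object in the `callbacks` list once per iteration, so a small recorder object can capture exactly the values shown in the log above. This is a minimal sketch assuming the `callback(i, named_losses, overall_loss, grads, wrt_value)` / `on_end()` interface of `vis.callbacks.OptimizerCallback`; the class name `LossHistory` is my own, and any object with these two methods should work via duck typing.

```python
class LossHistory(object):
    """Records the per-iteration losses printed by visualize_activation.

    Assumes keras-vis calls callback(i, named_losses, overall_loss,
    grads, wrt_value) once per iteration and on_end() when finished.
    """

    def __init__(self):
        self.history = []  # one dict per iteration

    def callback(self, i, named_losses, overall_loss, grads, wrt_value):
        # named_losses is a list of (name, value) pairs, as in the log output
        self.history.append({'iteration': i,
                             'named_losses': dict(named_losses),
                             'overall_loss': overall_loss})

    def on_end(self):
        pass


# Hypothetical usage with the code from the question:
# loss_history = LossHistory()
# img = visualize_activation(model, layer_idx, filter_indices=0,
#                            max_iter=20, callbacks=[loss_history],
#                            verbose=True)
# losses = [h['overall_loss'] for h in loss_history.history]
# iterations = [h['iteration'] for h in loss_history.history]
# # then plot iterations vs. losses, e.g. with matplotlib
```

After the call, `loss_history.history` holds the full loss trajectory instead of only the console printout.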

0 Answers:

There are no answers yet.