How do we get the values of hidden layer nodes in TensorFlow / TFLearn?

Asked: 2016-07-10 20:29:52

Tags: tensorflow neural-network tflearn

This is the code for XOR in TFLearn. I want to get the values of the nodes of the second-to-last (hidden) layer, not the weights. How can I get them? More specifically, for each of the four predictions below, I want the values of the 'layer2' nodes (as named in the code).

import tensorflow as tf
import tflearn

X = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]]  #input
Y_xor = [[0.], [1.], [1.], [0.]]  #input_labels

# Graph definition
with tf.Graph().as_default():
    tnorm = tflearn.initializations.uniform(minval=-1.0, maxval=1.0)
    net = tflearn.input_data(shape=[None, 2], name='inputLayer')
    net = tflearn.fully_connected(net, 2, activation='sigmoid', weights_init=tnorm, name='layer1')
    net = tflearn.fully_connected(net, 1, activation='softmax', weights_init=tnorm, name='layer2')
    regressor = tflearn.regression(net, optimizer='sgd', learning_rate=2., loss='mean_square', name='layer3')

    # Training
    m = tflearn.DNN(regressor)
    m.fit(X, Y_xor, n_epoch=100, snapshot_epoch=False) 

    # Testing
    print("Testing XOR operator")
    print("0 xor 0:", m.predict([[0., 0.]]))
    print("0 xor 1:", m.predict([[0., 1.]]))
    print("1 xor 0:", m.predict([[1., 0.]]))
    print("1 xor 1:", m.predict([[1., 1.]]))

    layer1_var = tflearn.variables.get_layer_variables_by_name('layer1')
    layer2_var = tflearn.variables.get_layer_variables_by_name('layer2')
    inputLayer_var = tflearn.variables.get_layer_variables_by_name('inputLayer')

    #result = tf.matmul(inputLayer_var, layer1_var[0]) + layer1_var[1]

    with m.session.as_default():
        print(tflearn.variables.get_value(layer1_var[0]))   #layer1 weights
        print(tflearn.variables.get_value(layer1_var[1]))   #layer1 bias
        print(tflearn.variables.get_value(layer2_var[0]))   #layer2 weights
        print(tflearn.variables.get_value(layer2_var[1]))   #layer2 bias
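
To illustrate, the commented-out tf.matmul line above is the kind of computation I have in mind. A rough NumPy sketch of the goal (sigmoid is hand-rolled here; this snippet is an illustration, not part of the original program):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fetch the trained parameters from the model's session
with m.session.as_default():
    W1 = tflearn.variables.get_value(layer1_var[0])
    b1 = tflearn.variables.get_value(layer1_var[1])
    W2 = tflearn.variables.get_value(layer2_var[0])
    b2 = tflearn.variables.get_value(layer2_var[1])

# Hidden ('layer1') node values for all four inputs at once, shape (4, 2)
hidden_values = sigmoid(np.dot(np.array(X), W1) + b1)
print(hidden_values)

# Pre-activation 'layer2' values; note that softmax over a single unit is always 1.0
print(np.dot(hidden_values, W2) + b2)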

2 Answers:

Answer 0 (score: 4)

You can create a new model that shares the same session (so it uses the same trained weights). Note that you could also save the 'm' model and load it with 'm2', which gives a similar result.

import tensorflow as tf
import tflearn

X = [[0., 0.], [0., 1.], [1., 0.], [1., 1.]]
Y_xor = [[0.], [1.], [1.], [0.]]

# Graph definition
with tf.Graph().as_default():
    tnorm = tflearn.initializations.uniform(minval=-1.0, maxval=1.0)
    net = tflearn.input_data(shape=[None, 2], name='inputLayer')
    layer1 = tflearn.fully_connected(net, 2, activation='sigmoid', weights_init=tnorm, name='layer1')
    layer2 = tflearn.fully_connected(layer1, 1, activation='softmax', weights_init=tnorm, name='layer2')
    regressor = tflearn.regression(layer2, optimizer='sgd', learning_rate=2., loss='mean_square', name='layer3')

    # Training
    m = tflearn.DNN(regressor)
    m.fit(X, Y_xor, n_epoch=100, snapshot_epoch=False) 

    # Testing
    print("Testing XOR operator")
    print("0 xor 0:", m.predict([[0., 0.]]))
    print("0 xor 1:", m.predict([[0., 1.]]))
    print("1 xor 0:", m.predict([[1., 0.]]))
    print("1 xor 1:", m.predict([[1., 1.]]))

    # You can create a new model, that share the same session (to get same weights)
    # Or you can also simply save and load a model
    m2 = tflearn.DNN(layer1, session=m.session)
    print(m2.predict([[0., 0.]]))
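
Here m2.predict returns the activations of the two 'layer1' nodes rather than the final network output. The save-and-load route mentioned above would look roughly like this (a sketch; the weights_only flag is an assumption about DNN.load, check your tflearn version):

    # Persist the trained weights, then load them into a model that stops at layer1
    m.save('xor.tflearn')
    m3 = tflearn.DNN(layer1)
    m3.load('xor.tflearn', weights_only=True)
    print(m3.predict([[0., 1.]]))  # layer1 node values for this input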

Answer 1 (score: 0)

This may not directly answer your question, but if you are using TFLearn, getting the weights of each layer is straightforward.

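A minimal sketch (assuming the layer tensors are bound to Python variables, as in the first answer's code; tflearn attaches W and b attributes to every fully_connected layer):

    # Fetch the trained parameters through the trained DNN wrapper 'm'
    print(m.get_weights(layer1.W))  # layer1 weights
    print(m.get_weights(layer1.b))  # layer1 biases
    print(m.get_weights(layer2.W))  # layer2 weights
    print(m.get_weights(layer2.b))  # layer2 biases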

Just remember one thing: put the weight-extraction code right after the layer itself, not after batch normalization or a standalone activation layer.