I am trying to re-implement this TensorFlow code in Keras, and I've noticed that the other tickets filed here don't match what I'm trying to recreate. The goal is to share a weight matrix across multiple Dense layers.
import tensorflow as tf
# define input and weight matrices
x = tf.placeholder(shape=[None, 4], dtype=tf.float32)
w1 = tf.Variable(tf.truncated_normal(stddev=.1, shape=[4, 12]),
                 dtype=tf.float32)
w2 = tf.Variable(tf.truncated_normal(stddev=.1, shape=[12, 2]),
                 dtype=tf.float32)
# neural network
hidden_1 = tf.nn.tanh(tf.matmul(x, w1))
projection = tf.matmul(hidden_1, w2)
hidden_2 = tf.nn.tanh(projection)
# decoder reuses the transposed weight matrices (tied weights)
hidden_3 = tf.nn.tanh(tf.matmul(hidden_2, tf.transpose(w2)))
y = tf.matmul(hidden_3, tf.transpose(w1))
# loss function and optimizer
loss = tf.reduce_mean(tf.reduce_sum((x - y) * (x - y), 1))
optimize = tf.train.AdamOptimizer().minimize(loss)
init = tf.initialize_all_variables()
The problem is re-implementing these weight layers in Keras as transposes of the original layers. I am currently using the Keras functional API to build my own network.
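For reference, an untied functional-API version of the same architecture would look roughly like this (layer sizes are taken from the TensorFlow code above; the two decoder layers learn their own independent kernels instead of reusing w1 and w2, which is exactly the part I want to change):
from keras.layers import Input, Dense
from keras.models import Model

# untied baseline: four independent Dense layers, no weight sharing
inputs = Input(shape=(4,))
h1 = Dense(12, use_bias=False, activation='tanh')(inputs)
h2 = Dense(2, use_bias=False, activation='tanh')(h1)
h3 = Dense(12, use_bias=False, activation='tanh')(h2)
outputs = Dense(4, use_bias=False)(h3)

model = Model(inputs, outputs)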
Answer 0 (score: 2)
First, define the two dense layers:
from keras.layers import Dense, Lambda
import keras.backend as K
dense1 = Dense(12, use_bias=False, activation='tanh')
dense2 = Dense(2, use_bias=False, activation='tanh')
You can then access each layer's kernel, e.g. via dense1.weights[0]. You can wrap this in a Lambda layer that also transposes the weights:
h3 = Lambda(lambda x: K.dot(x, K.transpose(dense2.weights[0])))(h2)
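Putting it together, a minimal end-to-end sketch (assuming Keras 2.x with the TensorFlow backend; the tanh in the first decoder step and the custom loss are added here only to mirror the TF graph above, and all names are illustrative):
from keras.layers import Input, Dense, Lambda
from keras.models import Model
import keras.backend as K

# encoder layers; their kernels play the roles of w1 and w2
dense1 = Dense(12, use_bias=False, activation='tanh')
dense2 = Dense(2, use_bias=False, activation='tanh')

inputs = Input(shape=(4,))
h1 = dense1(inputs)   # tanh(x @ w1)
h2 = dense2(h1)       # tanh(h1 @ w2)

# decoder: reuse the same kernels, transposed, inside Lambda layers;
# dense1/dense2 are already built at this point, so .weights[0] exists
h3 = Lambda(lambda t: K.tanh(K.dot(t, K.transpose(dense2.weights[0]))))(h2)
y = Lambda(lambda t: K.dot(t, K.transpose(dense1.weights[0])))(h3)

model = Model(inputs, y)

# the TF loss sums squared errors over features and averages over the batch
def reconstruction_loss(y_true, y_pred):
    return K.mean(K.sum(K.square(y_true - y_pred), axis=-1))

model.compile(optimizer='adam', loss=reconstruction_loss)
# model.fit(x_train, x_train, ...)  # autoencoder: the target equals the input
Note that only dense1 and dense2 hold trainable weights; the Lambda layers add no parameters of their own, so w1 and w2 are effectively shared between the encoder and the decoder.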