Defining tensorflow / tflearn with custom layers

Date: 2017-01-17 19:24:33

Tags: tensorflow tflearn

I want to use a matrix to specify the connections between activation nodes, rather than using a fully connected layer. For example:

I have a 20-node layer connected to a 10-node layer. With a typical fully connected layer, my W matrix is 20 x 10, and my b vector has size 10.

My activation looks like relu(Wx + b).

If I have a matrix of ones and zeros the same size as W, call it F, then a pairwise (element-wise) multiplication between W and F would remove connections between my first layer (20 nodes) and my second layer (10 nodes).
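For instance, here is a toy NumPy sketch of that masking idea (the random values are only for illustration):

import numpy as np

W = np.random.uniform(size=(20, 10))        # dense weights
F = np.random.randint(0, 2, size=(20, 10))  # mask of ones and zeros

W_masked = F * W  # element-wise product: zeros in F delete connections
assert (W_masked[F == 0] == 0).all()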

Here is my current code:

F.shape
# (20, 10)
import tflearn
import tensorflow as tf

input = tflearn.input_data(shape=[None, num_input])
first = tflearn.fully_connected(input, 20, activation='relu')

# Here is where I want to use a custom function, that uses my F matrix
# I don't want the second layer to be fully connected to the first,
# I want only connections that are ones (and not zeros) in F

# Currently:
second = tflearn.fully_connected(first, 10, activation='relu')

# What I want:
second = tflearn.custom_layer(first, my_fun)

Where my_fun gives me: relu( (F*W)X + b ), where F*W is a pairwise multiplication.

How do I create this function? I can't seem to find a detailed example of how this is done, but I also know that tflearn allows base tensorflow functions as well.

2 Answers:

Answer 0 (score: 1)

It's hard to do this strictly within tflearn, but it's straightforward if you're willing to include base tensorflow operations:

F.shape
# (20, 10)
import tflearn
import tensorflow as tf

input = tflearn.input_data(shape=[None, num_input])
# Cast the mask to float32 so it matches the dtype of the weights
tf_F = tf.constant(F, shape=[20, 10], dtype=tf.float32)

first = tflearn.fully_connected(input, 20, activation='relu')
# Here is where I want to use a custom function, that uses my F matrix
# I want only connections that are ones (and not zeros) in F

# Old:
# second = tflearn.fully_connected(first, 10, activation='relu')
# Solution:
W = tf.Variable(tf.random_uniform([20, 10]), name='Weights')
b = tf.Variable(tf.zeros([10]), name='biases')
# Element-wise mask: zeros in F zero out the corresponding weights
W_filtered = tf.multiply(tf_F, W)  # tf.mul in TensorFlow versions before 1.0
# first has shape [None, 20] and W_filtered [20, 10], so first goes on the left
second = tf.nn.relu(tf.matmul(first, W_filtered) + b)
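To train with this layer, the resulting tensor can be handed back to tflearn like any other network output. A minimal sketch under that assumption (the optimizer, loss, and one-hot target shape are illustrative choices, not part of the original answer):

net = tflearn.regression(second, optimizer='adam', loss='categorical_crossentropy')
model = tflearn.DNN(net)
# model.fit(X, Y)  # X: shape [None, num_input], Y: one-hot labels of shape [None, 10]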

Answer 1 (score: 0)

I had this problem too. My intent was to reduce the number of parameters in the model by removing a large number of connections between layers. But the approach @kamula shows above does not reduce the size of the model: the full W is still created and trained as a variable, and the mask only zeroes some of its entries, so the parameter count is unchanged.
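You can verify this by counting trainable variables, which shows the mask adds nothing and removes nothing. A minimal sketch, assuming TensorFlow 1.x graph mode to match the answer above:

import numpy as np
import tensorflow as tf

W = tf.Variable(tf.random_uniform([20, 10]), name='Weights')
b = tf.Variable(tf.zeros([10]), name='biases')
F = tf.constant(np.random.randint(0, 2, size=(20, 10)), dtype=tf.float32)
W_filtered = tf.multiply(F, W)  # masking creates no new variables

# The full 20x10 weight matrix is still a trainable variable,
# so the parameter count is unchanged by the mask:
n_params = sum(int(np.prod(v.get_shape().as_list())) for v in tf.trainable_variables())
print(n_params)  # 210 = 20*10 + 10, with or without F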