How can I build a neural network in TensorFlow with a custom activation function?

Asked: 2019-04-08 16:09:57

Tags: tensorflow neural-network

I am new to TensorFlow. I am building a three-layer neural network (with just one hidden layer) in TensorFlow, and I want to apply a custom activation function to the hidden layer.

I have implemented it with NumPy:

import numpy as np
import tensorflow as tf

def my_network(input_layer, centers, beta, weights):
    layer_1 = input_layer
    # squared distance of every sample to every center -> shape [num_centers, num_samples]
    gaussian = np.array([[sum([i * i for i in vec]) for vec in layer_1 - center] for center in centers])
    # scale each center's row of distances by its beta
    a = beta.reshape(len(beta), 1) * gaussian
    # elementwise exponential (note: a Gaussian RBF would usually use exp(-beta * dist))
    layer_2 = np.array([[np.exp(i) for i in vec] for vec in a])
    # linear output layer
    output = tf.matmul(np.transpose(layer_2).astype(np.float32), weights['w'])
    return output

I would like to convert this into TensorFlow code so that gradients can flow through it. How should I do that?
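For reference, the same computation can be expressed entirely in TensorFlow ops, which makes the whole network differentiable with no manual gradient code. Below is a minimal sketch, assuming centers, beta, and weights['w'] are already tensors (e.g. tf.Variable) of shapes [num_centers, dim], [num_centers], and [num_centers, out_dim]:

def my_network_tf(input_layer, centers, beta, weights):
    # pairwise differences: [batch, num_centers, dim]
    diff = tf.expand_dims(input_layer, 1) - tf.expand_dims(centers, 0)
    # squared distance to each center: [batch, num_centers]
    sq_dist = tf.reduce_sum(tf.square(diff), axis=2)
    # same formula as the NumPy version above (a true Gaussian RBF would use exp(-beta * ...))
    layer_2 = tf.exp(beta * sq_dist)
    # linear output layer: [batch, out_dim]
    return tf.matmul(layer_2, weights['w'])

Because every op here is differentiable, a tf.train optimizer can minimize a loss built on this output directly.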

1 Answer:

Answer 0 (score: 0):

Try the following snippet for multiple convolutional layers:

import tensorflow as tf

# placeholders
X = tf.placeholder(tf.float32, [None, 28, 28, 1], name="input_X")
y = tf.placeholder(tf.float32, [None, 14, 14, 1], name="Output_y")

# C1
with tf.name_scope("layer1"):
    W1 = tf.get_variable("W1", shape=[3, 3, 1, 32],
                         initializer=tf.contrib.layers.xavier_initializer())
    b1 = tf.get_variable("b1", shape=[32], initializer=tf.contrib.layers.xavier_initializer())
    layer1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME') + b1
    layer1_act = tf.nn.relu(layer1)  # swap in any custom activation function here


# C2
with tf.name_scope("layer2"):
    W2 = tf.get_variable("W2", shape=[3, 3, 32, 64],
                         initializer=tf.contrib.layers.xavier_initializer())
    b2 = tf.get_variable("b2", shape=[64], initializer=tf.contrib.layers.xavier_initializer())
    layer2 = tf.nn.conv2d(layer1_act, W2, strides=[1, 1, 1, 1], padding='SAME') + b2
    layer2_act = tf.nn.relu(layer2)  # swap in any custom activation function here


# max pool
with tf.name_scope("maxpool"):
    maxpool = tf.nn.max_pool(layer2_act, [1, 2, 2, 1], [1, 2, 2, 1], 'SAME')  #just to show how to use maxpool

# C3 (one output channel so the prediction shape matches y: [None, 14, 14, 1])
with tf.name_scope("layer3"):
    W3 = tf.get_variable("W3", shape=[3, 3, 64, 1],
                         initializer=tf.contrib.layers.xavier_initializer())
    b3 = tf.get_variable("b3", shape=[1], initializer=tf.contrib.layers.xavier_initializer())
    layer3 = tf.nn.conv2d(maxpool, W3, strides=[1, 1, 1, 1], padding='SAME') + b3
    layer3_act = tf.nn.relu(layer3)  # swap in any custom activation function here

# loss and train operation (name scopes may not contain spaces)
with tf.name_scope("loss_and_train"):
    # labels and predictions must share a dtype, so y stays float32
    loss = tf.reduce_mean(tf.losses.mean_squared_error(
        labels=y,
        predictions=layer3_act))
    optimizer = tf.train.AdamOptimizer(learning_rate=0.00001)
    train_op = optimizer.minimize(loss)
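To plug in a custom activation, replace tf.nn.relu above with any function composed of differentiable TensorFlow ops; TensorFlow derives the gradients automatically. A minimal sketch (the gaussian_activation helper and the X_batch / y_batch arrays are illustrative assumptions, not part of the original answer):

def gaussian_activation(x, beta=1.0):
    # exp(-beta * x^2): built only from differentiable ops, so no custom gradient is needed
    return tf.exp(-beta * tf.square(x))

# e.g. in layer1:
# layer1_act = gaussian_activation(layer1)

# minimal training step
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    _, loss_val = sess.run([train_op, loss],
                           feed_dict={X: X_batch, y: y_batch})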