Adding regularization to the cost function of a neural network for regression in TensorFlow

Time: 2018-02-18 06:52:43

Tags: tensorflow neural-network linear-regression regularized

I am trying to build a neural network for linear regression. I want to add a regularization term to the cost function, but the cost does not change after each iteration. The code is as follows:


import tensorflow as tf

# n_x, n_y, m, learning_rate, minibatch_size, seed, X_train, Y_train and
# random_mini_batches are presumably defined earlier in the full script (not shown in the post).
X = tf.placeholder(tf.float32, [n_x, None], name="x")
Y = tf.placeholder(tf.float32, [n_y, None], name="y")
W1 = tf.get_variable("W1", [25,11], initializer = tf.contrib.layers.xavier_initializer())
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
W2 = tf.get_variable("W2", [25,25], initializer = tf.contrib.layers.xavier_initializer())
b2 = tf.get_variable("b2", [25,1], initializer = tf.zeros_initializer())
W3 = tf.get_variable("W3", [25,25], initializer = tf.contrib.layers.xavier_initializer())
b3 = tf.get_variable("b3", [25,1], initializer = tf.zeros_initializer())
W4 = tf.get_variable("W4", [25,25], initializer = tf.contrib.layers.xavier_initializer())
b4 = tf.get_variable("b4", [25,1], initializer = tf.zeros_initializer())
W5 = tf.get_variable("W5", [12,25], initializer = tf.contrib.layers.xavier_initializer())
b5 = tf.get_variable("b5", [12,1], initializer = tf.zeros_initializer())
W6 = tf.get_variable("W6", [1,12], initializer = tf.contrib.layers.xavier_initializer())
b6 = tf.get_variable("b6", [1,1], initializer = tf.zeros_initializer())
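# Forward propagation: five ReLU hidden layers followed by a tanh output layer.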
Z1 = tf.add(tf.matmul(W1, X), b1)
A1 = tf.nn.relu(Z1)
Z2 = tf.add(tf.matmul(W2, A1), b2)
A2 = tf.nn.relu(Z2)
Z3 = tf.add(tf.matmul(W3, A2), b3)
A3 = tf.nn.relu(Z3)
Z4 = tf.add(tf.matmul(W4, A3), b4)
A4 = tf.nn.relu(Z4)
Z5 = tf.add(tf.matmul(W5, A4), b5)
A5 = tf.nn.relu(Z5)
Z6 = tf.add(tf.matmul(W6, A5), b6)
A6 = tf.nn.tanh(Z6)
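# Note: tanh bounds the output to (-1, 1); if the regression targets fall outside
# that range, the fit cannot match them and the cost may appear stuck.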
regularizers = tf.nn.l2_loss(W1) + tf.nn.l2_loss(W2) + tf.nn.l2_loss(W3) + tf.nn.l2_loss(W4) + tf.nn.l2_loss(W5) + tf.nn.l2_loss(W6)
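# tf.nn.l2_loss(w) returns sum(w ** 2) / 2, i.e. half the squared L2 norm (no sqrt).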
beta = 0.01
cost = (1 / (2 * m)) * tf.reduce_sum(tf.pow(A6 - Y, 2)) + beta * regularizers
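# Caution: under Python 2, 1 / (2 * m) is integer division and evaluates to 0,
# which would silently drop the data term and leave only beta * regularizers.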
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
num_epochs = 1500
init = tf.global_variables_initializer()
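# global_variables_initializer() covers only the variables that exist when it is
# created, so it must come after all the variable definitions above.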


with tf.Session() as sess:

    sess.run(init)

    for epoch in range(num_epochs):

        epoch_cost = 0.
        num_minibatches = int(m / minibatch_size)
        minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)

        for minibatch in minibatches:

            (minibatch_X, minibatch_Y) = minibatch

            _, minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})

            epoch_cost += minibatch_cost / num_minibatches
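
        # (The per-epoch output is not shown in the post; presumably epoch_cost is
        # printed here once per epoch, which is how the constant cost was observed.)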

After initializing the parameters and running the session, the cost printed after each epoch does not change. I would appreciate some help, and would like to know whether the cost function is correct.
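For reference, the cost written above is (1/(2m)) * sum((A6 - Y)^2) + beta * (l2_loss(W1) + ... + l2_loss(W6)), with l2_loss(w) = sum(w^2) / 2. Below is a minimal NumPy sketch of that formula, using made-up shapes and values (none of them from the post), to sanity-check the arithmetic:

import numpy as np

# Made-up values for illustration only.
a6 = np.array([[0.5, -0.2, 0.1]])   # predictions, shape (1, m)
y  = np.array([[0.7,  0.1, 0.0]])   # targets, same shape
w  = np.array([[0.3, -0.4]])        # a single weight matrix, for illustration
m = a6.shape[1]
beta = 0.01

mse_term = (1.0 / (2 * m)) * np.sum((a6 - y) ** 2)   # data term
l2_term = np.sum(w ** 2) / 2.0                       # matches tf.nn.l2_loss(w)
cost = mse_term + beta * l2_term
print(cost)

If the TensorFlow cost agrees with this hand computation on a small batch, the cost expression itself is consistent.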

0 Answers:

There are no answers.