How do I loop over hidden layers in a program written in Python 3?

Asked: 2016-10-30 22:54:56

Tags: python-3.x loops machine-learning deep-learning

I wrote a program that implements the hidden-layer approach of deep learning. Each hidden layer transforms the input data and passes the result on to the next hidden layer, until the final output emerges.

You can create as many hidden layers as you need, but building, say, 50 hidden layers by hand would take a lot of time and effort. So I thought of using a loop to generate the layers instead. However, since I am new to programming, I am finding this difficult.

Here is the program:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

n_nodes_hl1 = 500
n_nodes_hl2 = 500
n_nodes_hl3 = 500

n_classes = 10
batch_size = 100

# height * width

x = tf.placeholder('float',[None, 784])
y = tf.placeholder('float')

def neural_network_model(data):

   # (input_data * weights) + biases

   hidden_1_layer = {'weight' :tf.Variable(tf.random_normal([784, n_nodes_hl1])),
                     'biases' :tf.Variable(tf.random_normal([n_nodes_hl1]))}

   hidden_2_layer = {'weight' :tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                     'biases' :tf.Variable(tf.random_normal([n_nodes_hl2]))}

   hidden_3_layer = {'weight' :tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])),
                     'biases' :tf.Variable(tf.random_normal([n_nodes_hl3]))}

   output_layer = {'weight' :tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])),
                     'biases' :tf.Variable(tf.random_normal([n_classes]))}

   # (input_data * weights) + biases

   l1 = tf.add(tf.matmul(data, hidden_1_layer['weight']), hidden_1_layer['biases'])
   l1 = tf.nn.relu(l1)

   l2 = tf.add(tf.matmul(l1, hidden_2_layer['weight']), hidden_2_layer['biases'])
   l2 = tf.nn.relu(l2)

   l3 = tf.add(tf.matmul(l2, hidden_3_layer['weight']), hidden_3_layer['biases'])
   l3 = tf.nn.relu(l3)

   output = tf.matmul(l3, output_layer['weight']) + output_layer['biases']

   return output

def train_neural_network(x):
   prediction = neural_network_model(x)
   cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
   optimizer = tf.train.AdamOptimizer().minimize(cost)

   hm_epochs = 20

   with tf.Session() as sess:
      sess.run(tf.initialize_all_variables())

      for epoch in range(hm_epochs):
         epoch_loss = 0
         for _ in range(int(mnist.train.num_examples/batch_size)):
            epoch_x, epoch_y = mnist.train.next_batch(batch_size)
            _, c = sess.run([optimizer, cost], feed_dict = {x: epoch_x, y: epoch_y})
            epoch_loss += c
         print('Epoch', epoch, 'completed out of', hm_epochs, 'loss:', epoch_loss)

      correct = tf.equal(tf.argmax(prediction,1), tf.argmax(y,1))
      accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
      print('accuracy:', accuracy.eval({x:mnist.test.images, y:mnist.test.labels}))

train_neural_network(x)

Here is the part I need to loop over:

def neural_network_model(data):

   # (input_data * weights) + biases

   hidden_1_layer = {'weight' :tf.Variable(tf.random_normal([784, n_nodes_hl1])),
                     'biases' :tf.Variable(tf.random_normal([n_nodes_hl1]))}

   hidden_2_layer = {'weight' :tf.Variable(tf.random_normal([n_nodes_hl1, n_nodes_hl2])),
                     'biases' :tf.Variable(tf.random_normal([n_nodes_hl2]))}

   hidden_3_layer = {'weight' :tf.Variable(tf.random_normal([n_nodes_hl2, n_nodes_hl3])),
                     'biases' :tf.Variable(tf.random_normal([n_nodes_hl3]))}

   output_layer = {'weight' :tf.Variable(tf.random_normal([n_nodes_hl3, n_classes])),
                     'biases' :tf.Variable(tf.random_normal([n_classes]))}

   # (input_data * weights) + biases

   l1 = tf.add(tf.matmul(data, hidden_1_layer['weight']), hidden_1_layer['biases'])
   l1 = tf.nn.relu(l1)

   l2 = tf.add(tf.matmul(l1, hidden_2_layer['weight']), hidden_2_layer['biases'])
   l2 = tf.nn.relu(l2)

   l3 = tf.add(tf.matmul(l2, hidden_3_layer['weight']), hidden_3_layer['biases'])
   l3 = tf.nn.relu(l3)

   output = tf.matmul(l3, output_layer['weight']) + output_layer['biases']

   return output

1 Answer:

Answer 0 (score: 1)

I think you want a method like the following to create one hidden layer:

def make_hidden(input_num, hidden_num):
  return {'weight' :tf.Variable(tf.random_normal([input_num, 
                                                  hidden_num])),
          'biases' :tf.Variable(tf.random_normal([hidden_num]))}

The output layer can be created in the same way:

def make_output(hidden_num, output_classes):
  return {'weight' :tf.Variable(tf.random_normal([hidden_num,
                                                  output_classes])),
          'biases' :tf.Variable(tf.random_normal([output_classes]))}
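
For illustration, here is a quick sketch of how the three original hidden layers and the output layer could be rebuilt with these helpers (the names h1, h2, h3 and out are just placeholders I chose):

h1 = make_hidden(784, n_nodes_hl1)          # first hidden layer: 784 inputs
h2 = make_hidden(n_nodes_hl1, n_nodes_hl2)  # second hidden layer
h3 = make_hidden(n_nodes_hl2, n_nodes_hl3)  # third hidden layer
out = make_output(n_nodes_hl3, n_classes)   # output layer: n_classes outputs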

Then you keep a list that stores the number of nodes in each layer, starting with the input layer and ending with the last hidden layer:

n_nodes = [0, 784, 500, 500, 500]
     #     |___ dummy value so that n_nodes[i] and n_nodes[i+1] store
     #          the input and output sizes of the i-th hidden layer
     #          (1-based), because layers[0] is the input.
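
This also makes it trivial to scale up: for instance, the 50 hidden layers you mention could be specified like this (assuming, hypothetically, 500 nodes in each):

n_nodes = [0, 784] + [500] * 50   # 50 hidden layers of 500 nodes each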

Then your neural_network_model can be simplified:

def neural_network_model(data, n_nodes):
   layers = [None] * (len(n_nodes) - 1)   # layers[0] is the input, layers[i] the i-th hidden layer
   layers[0] = data
   for i in range(1, len(n_nodes) - 1):
     hidden_i = make_hidden(n_nodes[i], n_nodes[i+1])
     layers[i] = tf.add(tf.matmul(layers[i-1], hidden_i['weight']), hidden_i['biases'])
     layers[i] = tf.nn.relu(layers[i])

   output_layer = make_output(n_nodes[-1], n_classes)
   output = tf.matmul(layers[-1], output_layer['weight']) + output_layer['biases']

   return output
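
Note that the call site in train_neural_network would also need to pass the list through; a minimal sketch of that change (untested):

n_nodes = [0, 784, 500, 500, 500]   # or [0, 784] + [500] * 50

def train_neural_network(x):
   prediction = neural_network_model(x, n_nodes)   # pass the layer sizes in
   # ... the rest of the training code stays the same ...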

You may still need to make minor changes to get the code to work, but I hope you get the idea of how to loop over the hidden layers.