TensorFlow training on a simple neural network does not improve accuracy

Time: 2017-11-07 18:45:05

Tags: tensorflow

I have the network below, and when I train it the accuracy stays at 0.000. I tried to simplify the problem by including just two samples. The inputs are all zeros, except in one of the samples. The difference between the samples is that in the all-zeros case the output is something like 0.3 0.4 0.3, while in the other case it is 0.4 0.3 0.3 (both sum to 1). With only two training samples I would expect it to be easy to reach at least 50% accuracy, and probably 100%.

Question: is there something wrong with my network configuration? If not, any suggestions on how to proceed? So far TensorFlow has not been easy for me to debug.

Possibly relevant: I first initialized the weights and biases to zero and then got 0.5 accuracy. When I printed the contents of the layers after training, only the weights and biases of the output layer contained positive values.

import tensorflow as tf

self.session = tf.Session()
n_hidden_1 = 10 # 1st layer number of neurons
n_hidden_2 = 10 # 2nd layer number of neurons
self.num_input = 68 # data values
self.num_classes = 18

self.weights = {
    'h1': tf.Variable(tf.random_normal([self.num_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, self.num_classes]))
}
self.biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([self.num_classes]))
}
self.input = tf.placeholder(dtype=tf.float32, shape = [None, self.num_input])
self.output = tf.placeholder(dtype=tf.float32, shape = [None, self.num_classes])
# First hidden fully connected layer with n_hidden_1 (10) neurons
layer_1 = tf.nn.relu(tf.add(tf.matmul(self.input, self.weights['h1']), self.biases['b1']))
# Second hidden fully connected layer with n_hidden_2 (10) neurons
layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, self.weights['h2']), self.biases['b2']))
# Output fully connected layer with a neuron for each class, softmax-normalized
self.out_layer = tf.nn.softmax(tf.matmul(layer_2, self.weights['out']) + self.biases['out'])

# Mean squared error between the softmax output and the target distribution
self.loss_op = tf.reduce_mean(tf.squared_difference(self.out_layer, self.output))
optimizer = tf.train.AdamOptimizer(learning_rate=0.1)
self.train_op = optimizer.minimize(self.loss_op)

# Evaluate model
correct_pred = tf.equal(tf.argmax(self.out_layer, 1), tf.argmax(self.output, 1))
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
self.session.run(tf.global_variables_initializer())

def train(self, train_x, train_y):
    loss, acc = self.session.run([self.loss_op, self.accuracy], feed_dict={self.input: train_x, self.output: train_y})
    self.logger.info("Before training Loss= " + \
              "{:.4f}".format(loss) + ", Training Accuracy= " + \
              "{:.3f}".format(acc))

    self.session.run(self.train_op, feed_dict={self.input: train_x, self.output: train_y})
    loss, acc = self.session.run([self.loss_op, self.accuracy], feed_dict={self.input: train_x, self.output: train_y})
    self.logger.info("After training Loss= " + \
              "{:.4f}".format(loss) + ", Training Accuracy= " + \
              "{:.3f}".format(acc))

1 Answer:

Answer 0 (score: 1)

It looks like you are only running train(...) once. You need to call session.run(train_op, feed_dict=...) in a loop.

That call performs only a single update of the parameters, which will not be much better than the random initialization.
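For illustration, a minimal sketch of such a loop, reusing the names from your code (num_steps is a made-up placeholder; tune it for your problem):

num_steps = 1000  # assumed value, not from the original code
for step in range(num_steps):
    # Each run applies exactly one Adam update to the weights and biases.
    self.session.run(self.train_op, feed_dict={self.input: train_x, self.output: train_y})
    if step % 100 == 0:
        # Periodically report loss and accuracy to watch training progress.
        loss, acc = self.session.run([self.loss_op, self.accuracy],
                                     feed_dict={self.input: train_x, self.output: train_y})
        self.logger.info("Step {}: Loss= {:.4f}, Accuracy= {:.3f}".format(step, loss, acc))

With a two-sample training set like yours, repeated updates should let the loss drop far enough for the argmax of the output to match the targets.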