Why isn't my simple neural network learning?

Asked: 2017-01-02 07:21:32

Tags: python machine-learning tensorflow neural-network

I'm new to TensorFlow and neural networks. I'm trying to build a neural network that can classify images from the CIFAR-10 dataset.

Here is my code:

import tensorflow as tf
import pickle
import numpy as np
import random

image_size = 32*32*3 # 32x32 pixels, 3 colour channels
n_classes = 10
lay1_size = 50
batch_size = 100

def unpickle(filename):
    with open(filename,'rb') as f:
        data = pickle.load(f, encoding='latin1')
    x = data['data']
    y = data['labels']
    # shuffle the data
    z = list(zip(x,y))
    random.shuffle(z)
    x, y = zip(*z)
    x = x[:batch_size]
    y = y[:batch_size]
    # convert integer labels to one-hot arrays
    y = np.eye(n_classes)[[y]]
    return x, y

# set up network
def add_layer(inputs, in_size, out_size, activation_function=None):
    W = tf.Variable(tf.random_normal([in_size, out_size]), dtype=tf.float32)
    b = tf.Variable(tf.zeros([1,out_size]) + 0.1, dtype=tf.float32)
    Wx_plus_b = tf.matmul(inputs, W) + b
    if activation_function is None:
        output = Wx_plus_b
    else:
        output = activation_function(Wx_plus_b)
    return output

def compute_accuracy(v_xs, v_ys):
    global prediction
    y_pre = sess.run(prediction, feed_dict={xs:v_xs})
    correct_prediction = tf.equal(tf.argmax(y_pre,1), tf.argmax(v_ys, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    result = sess.run(accuracy, feed_dict={xs:v_xs, ys:v_ys})
    return result

xs = tf.placeholder(tf.float32, [None,image_size])
ys = tf.placeholder(tf.float32)

lay1 = add_layer(xs, image_size, lay1_size, activation_function=tf.nn.tanh)

lay2 = add_layer(lay1, lay1_size, lay1_size, activation_function=tf.nn.tanh)

prediction = add_layer(lay2, lay1_size, n_classes, activation_function=tf.nn.softmax)

cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys*tf.log(prediction), reduction_indices=[1]))

train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
#train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# run network
sess = tf.Session()
sess.run(tf.initialize_all_variables())

x_test, y_test = unpickle('test_batch')
for i in range(1000):
    x_train, y_train = unpickle('data_batch_1')
    sess.run(train_step, feed_dict={xs:x_train,ys:y_train})
    if i % 50 == 0:
        print(compute_accuracy(x_test, y_test)) 
sess.close()

I'm using two hidden layers with 50 nodes each. I run 1,000 loops; in each loop I shuffle the data set and take the first 100 images of that shuffle for training.
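As a sanity check on the data pipeline, one could print the shape and value range of a single unpickled batch. This is only a minimal sketch; it reuses the unpickle helper above and assumes 'data_batch_1' sits in the working directory:

import numpy as np

# Minimal sanity check on one training batch, reusing unpickle() from the code above.
x_check, y_check = unpickle('data_batch_1')
x_check = np.asarray(x_check)
y_check = np.asarray(y_check)

print(x_check.shape)                 # expected (100, 3072), i.e. batch_size x 32*32*3
print(x_check.min(), x_check.max())  # raw CIFAR-10 pixels are uint8, so typically 0 and 255
print(y_check.shape)                 # one-hot labels; np.eye(n_classes)[[y]] adds a leading axis of size 1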

I keep getting an accuracy of ~0.1 (chance level for 10 classes); the network isn't learning at all.
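For reference, a small variation of the training loop that also prints the training loss would show whether the optimizer makes any progress at all. This is just a sketch and reuses sess, train_step, cross_entropy, xs, ys, x_test and y_test from the code above:

# Sketch: monitor the training loss alongside the test accuracy.
for i in range(1000):
    x_train, y_train = unpickle('data_batch_1')
    _, loss_val = sess.run([train_step, cross_entropy],
                           feed_dict={xs: x_train, ys: y_train})
    if i % 50 == 0:
        print('step %d, loss %.4f, test accuracy %.3f'
              % (i, loss_val, compute_accuracy(x_test, y_test)))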

When I modify the code to use the MNIST dataset instead of CIFAR-10, I get an accuracy of ~0.87.
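One difference worth noting (an assumption on my part, not something verified against this exact setup): the MNIST tutorial's input_data.read_data_sets reader returns pixels already scaled to [0, 1], whereas the raw CIFAR-10 pickle batches contain uint8 values in 0-255. A sketch of scaling the CIFAR-10 inputs the same way before feeding them in:

import numpy as np

# Sketch: scale raw CIFAR-10 pixels into [0, 1], mirroring what the MNIST
# input_data reader does by default (it divides the uint8 pixels by 255).
x_train, y_train = unpickle('data_batch_1')
x_train = np.asarray(x_train, dtype=np.float32) / 255.0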

I took the code from the MNIST tutorial and tried to modify it to classify the CIFAR-10 data.

I can't figure out what's wrong here. How can I get my algorithm to learn?

0 Answers:

No answers yet.