TensorFlow image classifier accuracy won't change

Time: 2018-10-12 16:30:25

Tags: python tensorflow machine-learning computer-vision

I'm new to TensorFlow. I'm building a simple fully connected neural network for image classification. The images have shape (-1, 224, 224, 3) and the labels have shape (-1, 2). The problem is that the accuracy never improves: it stays stuck at 47% even after changing the learning rate, the optimizer, and the test set. Any help would be greatly appreciated! Thanks!

import matplotlib.pyplot as plt 
from util.MacOSFile import MacOSFile
import numpy as np
import _pickle as pickle
import tensorflow as tf

def pickle_load(file_path):
    with open(file_path, "rb") as f:
        return pickle.load(MacOSFile(f))

###hyperparameters###
batch_size = 32
iterations = 10

###loading training data start###
data = pickle_load('training.pickle')
x_train = []
y_train = []

for features, labels in data:
    x_train.append(features)
    y_train.append(labels)

x_train = np.array(x_train)
y_train = np.array(y_train)

###################################

###loading test data start###
data = pickle_load('testing.pickle')
x_test = []
y_test = []

for features, labels in data:
    x_test.append(features)
    y_test.append(labels)

x_test = np.array(x_test)
y_test = np.array(y_test)

###################################


###neural network###

x_s = tf.placeholder(tf.float32, [None, 224, 224, 3])
y_s = tf.placeholder(tf.float32, [None, 2])
x_image = tf.reshape(x_s, [-1, 150528])

W_1 = tf.Variable(tf.truncated_normal([150528, 8224]))
b_1 = tf.Variable(tf.zeros([8224]))
h_fc1 = tf.nn.relu(tf.matmul(x_image, W_1) + b_1)

W_2 = tf.Variable(tf.truncated_normal([8224, 1028]))
b_2 = tf.Variable(tf.zeros([1028]))
h_fc2 = tf.nn.relu(tf.matmul(h_fc1, W_2) + b_2)

W_3 = tf.Variable(tf.truncated_normal([1028, 2]))
b_3 = tf.Variable(tf.zeros([2]))
prediction = tf.nn.softmax(tf.matmul(h_fc2, W_3) + b_3)

cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_s, logits=prediction)
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cross_entropy)
init = tf.global_variables_initializer()

###neural network end###


with tf.Session() as sess:
    sess.run(init)

    train_sample_size = len(data) #how many data points?
    max_batches_in_data = int(train_sample_size/batch_size) #max number of batches possible; 623 

    for iteration in range(iterations):
        print('Iteration ', iteration)
        epoch = int(iteration/max_batches_in_data)
        start_idx = (iteration - epoch*max_batches_in_data)*batch_size
        end_idx = (iteration + 1 - epoch*max_batches_in_data)*batch_size
        mini_x_train = x_train[start_idx: end_idx]
        mini_y_train = y_train[start_idx: end_idx]

        ##actual training is here
        sess.run(train_step, feed_dict={x_s: mini_x_train, y_s: mini_y_train})

        #test accuracy#
        y_pre = sess.run(prediction, feed_dict={x_s: x_train[:100]})
        correct_prediction = tf.equal(tf.argmax(y_pre, 1), tf.argmax(y_train[:100], 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        result = sess.run(accuracy, feed_dict={x_s: x_train[:100], y_s: y_train[:100]})
        print("Result: {0}".format(result))

1 Answer:

Answer 0 (score: 0):

I made a few observations. First of all, your code is somewhat outdated: you don't have to wire up fully connected layers by hand any more, there is something made exactly for that: dense layers. And since you are loading images, why not use convolutional layers as well? I would also recommend the adam optimizer; just keep its parameters at their default values. I hope I could help :)
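
For illustration, here is a minimal TF 1.x sketch of what those suggestions could look like. The layer sizes, filter counts, and kernel sizes below are arbitrary placeholders, not tuned values:

import tensorflow as tf

x_s = tf.placeholder(tf.float32, [None, 224, 224, 3])
y_s = tf.placeholder(tf.float32, [None, 2])

# convolution + pooling blocks learn image features instead of treating
# each pixel as an independent input
conv1 = tf.layers.conv2d(x_s, filters=32, kernel_size=3, activation=tf.nn.relu)
pool1 = tf.layers.max_pooling2d(conv1, pool_size=2, strides=2)
conv2 = tf.layers.conv2d(pool1, filters=64, kernel_size=3, activation=tf.nn.relu)
pool2 = tf.layers.max_pooling2d(conv2, pool_size=2, strides=2)

# dense layers replace the hand-built matmul/bias/relu blocks
flat = tf.layers.flatten(pool2)
fc1 = tf.layers.dense(flat, units=128, activation=tf.nn.relu)
logits = tf.layers.dense(fc1, units=2)  # no activation: these are raw logits

# softmax_cross_entropy_with_logits_v2 applies softmax internally, so it
# must be fed raw logits, not the output of tf.nn.softmax
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_s, logits=logits))
train_step = tf.train.AdamOptimizer().minimize(cross_entropy)  # default parameters

# build the accuracy ops once, outside the training loop
prediction = tf.nn.softmax(logits)
correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(y_s, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

The training session then runs train_step and accuracy exactly as in your loop. Note in particular that the loss above receives the raw logits: your code passes a softmax output into softmax_cross_entropy_with_logits_v2, which applies softmax a second time. AdamOptimizer's default learning rate of 0.001 is usually a sensible starting point.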