How do I prepare training data and prediction data?

Date: 2017-03-25 06:49:06

Tags: python tensorflow

I'm new to TensorFlow and machine learning (and to Python). As a first step toward building an image recognition program, I got stuck preparing the data to feed. Can someone help me with this? I looked at this tutorial, but its data preparation confused me: MNIST softmax for beginners.

I don't expect to get a complete, polished program out of this question; rather, I'd love to hear whether you can tell me how TensorFlow works with feed_dict. Right now my mental model is: "it works like a for loop over imageHolder, taking 2352 bytes (one image) of data and putting it into the training op, where it makes a prediction based on the current model, compares it against the data at the same index in labelHolder, and then corrects the model." So I expected to be able to feed in a single set of 2352 bytes (another image of the same size) and get a prediction back. I'll also put my code here, in case my idea is right and the error comes from a bad implementation.

Say I have data for 5 classes, 3670 images in total. When loading the data into feed_dict for training, I've converted all images to 28x28 pixels with 3 channels. This results in a tensor of shape (3670, 2352) for the image holder in feed_dict. After that, I managed to prepare a tensor of shape (3670,) for the label holder in feed_dict. The training code looks like this:
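For reference, the shapes described above can be sketched with numpy alone (synthetic zero-valued pixel data stands in for the real images, since the actual files aren't shown):

```python
import numpy as np

IMAGE_SIZE = 28
CHANNELS = 3
IMAGE_PIXELS = IMAGE_SIZE * IMAGE_SIZE * CHANNELS  # 2352
NUM_IMAGES = 3670

# Stand-in for the resized 28x28x3 images loaded with PIL.
images = [np.zeros((IMAGE_SIZE, IMAGE_SIZE, CHANNELS), dtype=np.float32)
          for _ in range(NUM_IMAGES)]

# Flatten every image into one row: shape becomes (3670, 2352).
imageTrain = np.array(images).reshape(NUM_IMAGES, IMAGE_PIXELS)

# One integer class id per image: shape (3670,).
labelTrain = np.zeros(NUM_IMAGES, dtype=np.int32)

print(imageTrain.shape)  # (3670, 2352)
print(labelTrain.shape)  # (3670,)
```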

for step in xrange(FLAGS.max_steps):
        feed_dict = {
            imageHolder: imageTrain,
            labelHolder: labelTrain,
        }
        _, loss_rate = sess.run([train_op, loss_op], feed_dict=feed_dict)

Then my code to predict a new image with the model above:

testing_dataset = do_get_file_list(FLAGS.guess_dir)
x = tf.placeholder(tf.float32, shape=(IMAGE_PIXELS))
for data in testing_dataset:
    image = Image.open(data)
    image = image.resize((IMAGE_SIZE, IMAGE_SIZE))
    image = np.array(image).reshape(IMAGE_PIXELS)
    prediction = session.run(tf.argmax(logits, 1), feed_dict={x: image})

But the problem is that no matter what shape my test data has, (2352,) or (1, 2352), the prediction line always raises a "Cannot feed value of shape..." error (it asks for a (3670, 2352) shape, but there's no way to provide that).

Here are some of the flags I used:

IMAGE_SIZE = 28
CHANNELS = 3
IMAGE_PIXELS = IMAGE_SIZE * IMAGE_SIZE * CHANNELS

Training op and loss calculation:

def do_get_op_compute_loss(logits, labels):
    labels = tf.to_int64(labels)
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits, name='xentropy')
    loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')
    return loss

def do_get_op_training(loss_op, training_rate):
    optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate)
    global_step = tf.Variable(0, name='global_step', trainable=False)
    train_op = optimizer.minimize(loss_op, global_step=global_step)
    return train_op

Variables:

imageHolder = tf.placeholder(tf.float32, [data_count, IMAGE_PIXELS])
labelHolder = tf.placeholder(tf.int32, [data_count])

The full program:

import os
import math
import tensorflow as tf
from PIL import Image
import numpy as np
from six.moves import xrange

flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_float('learning_rate', 0.01, 'Initial learning rate.')
flags.DEFINE_integer('max_steps', 200, 'Number of steps to run trainer.')
flags.DEFINE_integer('hidden1', 128, 'Number of units in hidden layer 1.')
flags.DEFINE_integer('hidden2', 32, 'Number of units in hidden layer 2.')
flags.DEFINE_integer('batch_size', 4, 'Batch size.  '
                     'Must divide evenly into the dataset sizes.')
flags.DEFINE_string('train_dir', 'data', 'Directory to put the training data.')
flags.DEFINE_string('save_file', '.\\data\\model.ckpt', 'Directory to put the training data.')
flags.DEFINE_string('guess_dir', 'work', 'Directory to put the testing data.')
#flags.DEFINE_boolean('fake_data', False, 'If true, uses fake data '
#                    'for unit testing.')

IMAGE_SIZE = 28
CHANNELS = 3
IMAGE_PIXELS = IMAGE_SIZE * IMAGE_SIZE * CHANNELS

def do_inference(images, hidden1_units, hidden2_units, class_count):
    #HIDDEN LAYER 1
    with tf.name_scope('hidden1'):
        weights = tf.Variable(
            tf.truncated_normal([IMAGE_PIXELS, hidden1_units], stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))),
            name='weights')
        biases = tf.Variable(tf.zeros([hidden1_units]), name='biases')
        hidden1 = tf.nn.relu(tf.matmul(images, weights) + biases)
    #HIDDEN LAYER 2
    with tf.name_scope('hidden1'):
        weights = tf.Variable(
            tf.truncated_normal([hidden1_units, hidden2_units], stddev=1.0 / math.sqrt(float(hidden1_units))),
            name='weights')
        biases = tf.Variable(tf.zeros([hidden2_units]), name='biases')
        hidden2 = tf.nn.relu(tf.matmul(hidden1, weights) + biases)
    #LINEAR
    with tf.name_scope('softmax_linear'):
        weights = tf.Variable(
            tf.truncated_normal([hidden2_units, class_count], stddev=1.0 / math.sqrt(float(hidden2_units))),
            name='weights')
        biases = tf.Variable(tf.zeros([class_count]), name='biases')
        logits = tf.matmul(hidden2, weights) + biases
    return logits

def do_get_op_compute_loss(logits, labels):
    labels = tf.to_int64(labels)
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits, name='xentropy')
    loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')
    return loss

def do_get_op_training(loss_op, training_rate):
    optimizer = tf.train.GradientDescentOptimizer(FLAGS.learning_rate)
    global_step = tf.Variable(0, name='global_step', trainable=False)
    train_op = optimizer.minimize(loss_op, global_step=global_step)
    return train_op

def do_get_op_evaluate(logits, labels):
    correct = tf.nn.in_top_k(logits, labels, 1)
    return tf.reduce_sum(tf.cast(correct, tf.int32))

def do_evaluate(session, eval_correct_op, imageset_holder, labelset_holder, train_images, train_labels):
    true_count = 0
    num_examples = FLAGS.batch_size * FLAGS.batch_size
    for step in xrange(FLAGS.batch_size):
        feed_dict = {imageset_holder: train_images, labelset_holder: train_labels,}
        true_count += session.run(eval_correct_op, feed_dict=feed_dict)
        precision = true_count / num_examples
    # print('  Num examples: %d  Num correct: %d  Precision @ 1: %0.04f' %
        # (num_examples, true_count, precision))

def do_init_param(data_count, class_count): 
    # Generate placeholder
    imageHolder = tf.placeholder(tf.float32, shape=(data_count, IMAGE_PIXELS))
    labelHolder = tf.placeholder(tf.int32, shape=(data_count))

    # Build a graph for prediction from inference model
    logits = do_inference(imageHolder, FLAGS.hidden1, FLAGS.hidden2, class_count)

    # Add loss calculating op
    loss_op = do_get_op_compute_loss(logits, labelHolder)

    # Add training op
    train_op = do_get_op_training(loss_op, FLAGS.learning_rate)

    # Add evaluate correction op
    evaluate_op = do_get_op_evaluate(logits, labelHolder)

    # Create session for op operating
    sess = tf.Session()

    # Init param
    init = tf.initialize_all_variables()
    sess.run(init)
    return sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, logits

def do_get_class_list():
    return [{'name': name, 'path': os.path.join(FLAGS.train_dir, name)} for name in os.listdir(FLAGS.train_dir)
            if os.path.isdir(os.path.join(FLAGS.train_dir, name))]

def do_get_file_list(folderName):
    return [os.path.join(folderName, name) for name in os.listdir(folderName)
            if (os.path.isdir(os.path.join(folderName, name)) == False)]

def do_init_data_list():
    file_list = []
    for classItem in do_get_class_list():
        for dataItem in do_get_file_list(classItem['path']):
            file_list.append({'name': classItem['name'], 'path': dataItem})

    # Renew data feeding dictionary
    imageTrainList, labelTrainList = do_seperate_data(file_list)
    imageTrain = []
    for imagePath in imageTrainList:
        image = Image.open(imagePath)
        image = image.resize((IMAGE_SIZE, IMAGE_SIZE))
        imageTrain.append(np.array(image))

    imageCount = len(imageTrain)
    imageTrain = np.array(imageTrain)
    imageTrain = imageTrain.reshape(imageCount, IMAGE_PIXELS)

    id_list, id_map = do_generate_id_label(labelTrainList)
    labelTrain = np.array(id_list)
    return imageTrain, labelTrain, id_map

def do_init():
    imageTrain, labelTrain, id_map = do_init_data_list()
    sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, logits = do_init_param(len(imageTrain), len(id_map))
    return sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, imageTrain, labelTrain, id_map, logits

def do_seperate_data(data):
    images = [item['path'] for item in data]
    labels = [item['name'] for item in data]
    return images, labels

def do_generate_id_label(label_list):
    trimmed_label_list = list(set(label_list))
    id_map = {trimmed_label_list.index(label): label for label in trimmed_label_list}
    reversed_id_map = {label: trimmed_label_list.index(label) for label in trimmed_label_list}
    id_list = [reversed_id_map.get(item) for item in label_list]
    return id_list, id_map

def do_training(sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, imageTrain, labelTrain):
    # Training state checkpoint saver
    saver = tf.train.Saver()
    # feed_dict = {
        # imageHolder: imageTrain,
        # labelHolder: labelTrain,
    # }

    for step in xrange(FLAGS.max_steps):
        feed_dict = {
            imageHolder: imageTrain,
            labelHolder: labelTrain,
        }
        _, loss_rate = sess.run([train_op, loss_op], feed_dict=feed_dict)

        if step % 100 == 0:
            print('Step {0}: loss = {1}'.format(step, loss_rate))
        if (step + 1) % 1000 == 0 or (step + 1) == FLAGS.max_steps:
            saver.save(sess, FLAGS.save_file, global_step=step)
            print('Evaluate training data')
            do_evaluate(sess, evaluate_op, imageHolder, labelHolder, imageTrain, labelTrain)

def do_predict(session, logits):
    # xentropy
    testing_dataset = do_get_file_list(FLAGS.guess_dir)
    x = tf.placeholder(tf.float32, shape=(IMAGE_PIXELS))
    print('Perform predict')
    print('==================================================================================')
    # TEMPORARY CODE
    for data in testing_dataset:
        image = Image.open(data)
        image = image.resize((IMAGE_SIZE, IMAGE_SIZE))
        image = np.array(image).reshape(IMAGE_PIXELS)
        print(image.shape)
        prediction = session.run(logits, {x: image})
        print('{0}: {1}'.format(data, prediction))

def main(_):
    # TF notice default graph
    with tf.Graph().as_default():
        sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, imageTrain, labelTrain, id_map, logits = do_init()
        print("done init")
        do_training(sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, imageTrain, labelTrain)
        print("done training")
        do_predict(sess, logits)

# NO IDEA
if __name__ == '__main__':
    tf.app.run()

1 Answer:

Answer 0 (score: 0)

It's important to understand the error. You said:

But the problem is that the prediction line always raises the error "Cannot feed value of shape..." no matter what shape my testing data has, (2352,) or (1, 2352) (it asks for a (3670, 2352) shape, but there's no way).

Oh yes, my friend, yes there is. It's telling you there is a problem with your shapes and you need to check them. Why does it ask for 3670?

Because your model accepts input of shape (data_count, IMAGE_PIXELS), which you declare below:

def do_init_param(data_count, class_count): 
    # Generate placeholder
    imageHolder = tf.placeholder(tf.float32, shape=(data_count, IMAGE_PIXELS))
    labelHolder = tf.placeholder(tf.int32, shape=(data_count))

This function is called here:

sess, train_op, loss_op, evaluate_op, imageHolder, labelHolder, logits = do_init_param(len(imageTrain), len(id_map))

len(imageTrain) is the length of your dataset, probably 3670 images.

Then you have the prediction function:

def do_predict(session, logits):
    # xentropy
    testing_dataset = do_get_file_list(FLAGS.guess_dir)
    x = tf.placeholder(tf.float32, shape=(IMAGE_PIXELS))
    ...
    prediction = session.run(logits, {x: image})

Notice that x is useless here. You are feeding the image you want to predict to your model, but the model does not expect that shape; it expects the original placeholder shape, (3670, 2352), because that's what you told it to expect.

The solution is to declare the placeholder with a non-specific first dimension, for example:

imageHolder = tf.placeholder(tf.float32, shape=(None, IMAGE_PIXELS))

When you predict the label of an image, you can feed a single image or multiple images (a mini-batch), but the input must always have shape [number_images, IMAGE_PIXELS].
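A minimal shape check (numpy only, with the document's constants) illustrates the point: a single image must be reshaped into a batch of one before feeding, which then matches a placeholder declared with a None first dimension:

```python
import numpy as np

IMAGE_PIXELS = 28 * 28 * 3  # 2352

# A single test image, flattened the way the question does it.
image = np.zeros(IMAGE_PIXELS, dtype=np.float32)
print(image.shape)  # (2352,) -- does not match (3670, 2352) or (None, 2352)

# Reshape into a batch of one before feeding.
batch = image.reshape(1, IMAGE_PIXELS)
print(batch.shape)  # (1, 2352) -- compatible with shape=(None, IMAGE_PIXELS)

# With the placeholder declared as
#   x = tf.placeholder(tf.float32, shape=(None, IMAGE_PIXELS))
# feed_dict={x: batch} is accepted for any batch size, and the same
# placeholder can be reused for the full (3670, 2352) training set.
```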

Does that make sense?