How to read a csv file and train the data with Softmax regression in Tensorflow

Date: 2017-05-13 06:20:47

Tags: tensorflow tensorflow-serving tensor prettytensor

I have just started studying Tensorflow and ran into a problem while training data. My problem is reading a csv file and then using softmax classification to estimate a student's grade (A, B, or C) based on their study time and class attendance.


I defined the columns and then loaded the csv file like this:

COLUMNS = ["studytime", "attendance", "A", "B", "C"]
FEATURES = ["studytime", "attendance"]
LABEL = ["A", "B", "C"]
training_set = pd.read_csv("hw1.csv", skipinitialspace=True,
                       skiprows=1, names=COLUMNS)

Then I tried to train the softmax classifier the way the Tensorflow for MNIST tutorial does, but I don't know how to define batch_xs and batch_ys for its training loop. So far I have defined the feature and label tensors like this:

feature_cols = [tf.contrib.layers.real_valued_column(k) for k in FEATURES]
labels = [tf.contrib.layers.real_valued_column(k) for k in LABEL]

And how can I define a function that estimates the grades of three students from their study time and attendance, for example [11,7], [3,4], [1,0]?

Could you help me figure this out?

Thanks in advance,

2 answers:

Answer 0 (score: 0):

It sounds like you are reading the CSV into a DataFrame? You can certainly implement the batching process manually, but there is an efficient built-in way to build queues and batches in TF. It is a bit convoluted, but it serves rows either sequentially or with random shuffling, which is very convenient. Just make sure your rows are all of equal length, so that you can easily specify which cells represent the Xs and which represent the Ys.

The two functions you need are tf.decode_csv and tf.train.shuffle_batch (or tf.train.batch, if you do not need random shuffling).
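In case it helps, the non-shuffling variant is a near drop-in replacement inside the input pipeline shown further down; a sketch (note that tf.train.batch takes no min_after_dequeue argument):

dateLbl_batch, feature_batch, label_batch = tf.train.batch(
    [dateLbl, features, label],  # the per-line tensors from read_from_csv below
    batch_size=batch_size,
    capacity=capacity)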

We discussed this in detail in this post, which includes a complete code example: TF CSV Batching Example

It looks like your data is all numeric and your Y is in one-hot format, so the MNIST example should be a good fit for implementing your estimation function.
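For the estimation part of your question, a minimal sketch (assuming a trained MNIST-style model, with x the feature placeholder, y the softmax output, and sess the open session) might look like this:

new_students = [[11., 7.], [3., 4.], [1., 0.]]  # [studytime, attendance] per student
predictions = sess.run(tf.argmax(y, 1), feed_dict={x: new_students})
print([['A', 'B', 'C'][i] for i in predictions])  # map each class index to a grade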

*** UPDATE:

This is roughly the sequence of operations:

1. Define the two functions shown in the linked example: one that reads the CSV file line by line, and one that packs those lines into batches of N (randomly or sequentially).

2. Start the reading loop via while not coord.should_stop(): — this loop runs until it exhausts the contents of all the CSV files you fed into the queue.

3. In each iteration of the loop, doing a sess.run on these variables gives you a batch of Xs and Ys, plus whatever extra meta-type content you may want from each row of the CSV file, such as the date label in that example (in your case it could be the student's name or whatever):

dateLbl_batch, feature_batch, label_batch = sess.run([dateLbl, features, labels])   

When TF reaches the end of the file(s) it throws an exception, which is why all of the code above sits in a try/catch block — by catching that exception you know that you are done.

The functionality above gives you very granular cell-by-cell access to the CSV files and lets you pack them into batches of N, run for however many epochs you want, and so on.

*** UPDATE 2:

Here is the complete code that reads your CSV file in batches of the format you described. It simply prints the contents of each batch; from here, you can easily hook this code up with code that actually performs the training, etc.

import tensorflow as tf

fileName = 'data/study.csv'

try_epochs = 1
batch_size = 3

S = 1 # number of cells taken up by the student label
F = 2 # number of feature cells
L = 3 # length of the one-hot label vector (A, B, C)

# set defaults to something (TF requires defaults for the number of cells you are going to read)
rDefaults = [['a'] for _ in range(S + F + L)]

# function that reads the input file, line-by-line
def read_from_csv(filename_queue):
    reader = tf.TextLineReader(skip_header_lines=True) # skip the header line
    _, csv_row = reader.read(filename_queue) # read one line
    data = tf.decode_csv(csv_row, record_defaults=rDefaults) # use defaults for this line (in case of missing data)
    studentLbl = tf.slice(data, [0], [S]) # the first cell is the student label, kept as a string
    features = tf.string_to_number(tf.slice(data, [S], [F]), tf.float32) # the next F cells are the features
    label = tf.string_to_number(tf.slice(data, [S+F], [L]), tf.float32) # the remaining L cells form the one-hot label
    return studentLbl, features, label

# function that packs each read line into batches of specified size
def input_pipeline(fName, batch_size, num_epochs=None):
    filename_queue = tf.train.string_input_producer(
        [fName],
        num_epochs=num_epochs,
        shuffle=True)  # this refers to multiple files, not line items within files
    dateLbl, features, label = read_from_csv(filename_queue)
    min_after_dequeue = 10000 # min of where to start loading into memory
    capacity = min_after_dequeue + 3 * batch_size # max of how much to load into memory
    # this packs the above lines into a batch of size you specify:
    dateLbl_batch, feature_batch, label_batch = tf.train.shuffle_batch(
        [dateLbl, features, label],
        batch_size=batch_size,
        capacity=capacity,
        min_after_dequeue=min_after_dequeue)
    return dateLbl_batch, feature_batch, label_batch

# these are the student label, features, and label:
studentLbl, features, labels = input_pipeline(fileName, batch_size, try_epochs)

with tf.Session() as sess:

    gInit = tf.global_variables_initializer().run()
    lInit = tf.local_variables_initializer().run()

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    try:
        while not coord.should_stop():
            # load student-label, features, and label as a batch:
            studentLbl_batch, feature_batch, label_batch = sess.run([studentLbl, features, labels])

            print(studentLbl_batch)
            print(feature_batch)
            print(label_batch)
            print('----------')

    except tf.errors.OutOfRangeError:
        print("Done looping through the file")

    finally:
        coord.request_stop()

    coord.join(threads)

Assuming your CSV file looks something like this (shown here as a table; the actual file should be comma-separated):

name    studytime   attendance  A   B   C
S1  2   1   0   1   0
S2  3   2   1   0   0
S3  4   3   0   0   1
S4  3   5   0   0   1
S5  4   4   0   1   0
S6  2   1   1   0   0

The code above should print the following output:

[[b'S5']
 [b'S6']
 [b'S3']]
[[ 4.  4.]
 [ 2.  1.]
 [ 4.  3.]]
[[ 0.  1.  0.]
 [ 1.  0.  0.]
 [ 0.  0.  1.]]
----------
[[b'S2']
 [b'S1']
 [b'S4']]
[[ 3.  2.]
 [ 2.  1.]
 [ 3.  5.]]
[[ 1.  0.  0.]
 [ 0.  1.  0.]
 [ 0.  0.  1.]]
----------
Done looping through the file

So instead of printing the contents of the batches, you simply use them as your X and Y for training via the feed_dict.
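For example, the training step can consume the batches directly; a sketch, assuming x, y_, and train_step are built as in the MNIST tutorial and features/labels come from the pipeline above:

try:
    while not coord.should_stop():
        # fetch one batch of features and one-hot labels ...
        feature_batch, label_batch = sess.run([features, labels])
        # ... and feed it straight into the training step as X and Y
        sess.run(train_step, feed_dict={x: feature_batch, y_: label_batch})
except tf.errors.OutOfRangeError:
    print("Done looping through the file")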

Answer 1 (score: 0):

Here is my attempt, but the accuracy is not as high as I expected.

import tensorflow as tf

fileName = 'hw1.csv'

try_epochs = 1
batch_size = 8

S = 1 # number of cells taken up by the student label
F = 2 # number of feature cells
L = 3 # length of the one-hot label vector (A, B, C)

# set defaults to something (TF requires defaults for the number of cells you are going to read)
rDefaults = [['a'] for _ in range(S + F + L)]

# function that reads the input file, line-by-line
def read_from_csv(filename_queue):
    reader = tf.TextLineReader(skip_header_lines=True) # skip the header line
    _, csv_row = reader.read(filename_queue) # read one line
    data = tf.decode_csv(csv_row, record_defaults=rDefaults) # use defaults for this line (in case of missing data)
    studentLbl = tf.slice(data, [0], [S]) # the first cell is the student label, kept as a string
    features = tf.string_to_number(tf.slice(data, [S], [F]), tf.float32) # the next F cells are the features
    label = tf.string_to_number(tf.slice(data, [S+F], [L]), tf.float32) # the remaining L cells form the one-hot label
    return studentLbl, features, label

# function that packs each read line into batches of specified size
def input_pipeline(fName, batch_size, num_epochs=None):
    filename_queue = tf.train.string_input_producer(
        [fName],
        num_epochs=num_epochs,
        shuffle=True)  # this refers to multiple files, not line items within files
    dateLbl, features, label = read_from_csv(filename_queue)
    min_after_dequeue = 10000 # min of where to start loading into memory
    capacity = min_after_dequeue + 3 * batch_size # max of how much to load into memory
    # this packs the above lines into a batch of size you specify:
    dateLbl_batch, feature_batch, label_batch = tf.train.shuffle_batch(
        [dateLbl, features, label],
        batch_size=batch_size,
        capacity=capacity,
        min_after_dequeue=min_after_dequeue)
    return dateLbl_batch, feature_batch, label_batch

# these are the student label, features, and label:
studentLbl, features, labels = input_pipeline(fileName, batch_size, try_epochs)

x = tf.placeholder(tf.float32, [None, 2])

W = tf.Variable(tf.zeros([2, 3]))

b = tf.Variable(tf.zeros([3]))

# keep the raw logits separate from the softmax output:
# tf.nn.softmax_cross_entropy_with_logits applies softmax internally,
# so it must be fed the logits, not the already-softmaxed y
logits = tf.matmul(x, W) + b

y = tf.nn.softmax(logits)

y_ = tf.placeholder(tf.float32, [None, 3])

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))

train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cross_entropy)


with tf.Session() as sess:

    gInit = tf.global_variables_initializer().run()
    lInit = tf.local_variables_initializer().run()

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    try:
        while not coord.should_stop():
            # load student-label, features, and label as a batch:
            studentLbl_batch, feature_batch, label_batch = sess.run([studentLbl, features, labels])

            print(studentLbl_batch)
            print(feature_batch)
            print(label_batch)
            print('----------')

            batch_xs = feature_batch
            batch_ys = label_batch
            sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})  # feeding data

    except tf.errors.OutOfRangeError:
        print("Done looping through the file")

    finally:
        coord.request_stop()

    coord.join(threads)

    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))

    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # accuracy here is evaluated on the last batch that was read
    print(sess.run(accuracy, feed_dict={x: feature_batch, y_: label_batch}))

    print(sess.run(W))
    print(sess.run(b))

Accuracy:

  0.375

W, b:

    [[ 0.00555556  0.00972222 -0.01527778]
     [ 0.00555556  0.01388889 -0.01944444]]
    [-0.00277778  0.00138889  0.00138889]
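With these learned parameters you can already estimate the grades of the three students from the question; a small numpy sketch (hypothetical follow-up, with W and b copied from the printout above):

import numpy as np

# weights and bias copied from the printout above
W = np.array([[ 0.00555556,  0.00972222, -0.01527778],
              [ 0.00555556,  0.01388889, -0.01944444]])
b = np.array([-0.00277778,  0.00138889,  0.00138889])

# the three students from the question: [studytime, attendance]
students = np.array([[11., 7.], [3., 4.], [1., 0.]])

scores = students.dot(W) + b  # softmax is monotonic, so argmax of the raw scores suffices
print([['A', 'B', 'C'][i] for i in scores.argmax(axis=1)])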