TensorFlow, feed_dict, and batching training sets

Date: 2016-08-20 20:16:48

Tags: tensorflow

Is there any smart way to use feed_dict and create batches on the fly? Is there anything under the hood that could help here? My training data is loaded into lists, but it is not batched. Is there a clever way to select batches at will with feed_dict, without pre-batching the data?

For example, I have:

    sess.run(train_step, feed_dict={x_: X, y_: Y})

where X and Y are the inputs and outputs of a standard NN, and the length of X is the number of training samples. What do people recommend for creating batches?

The following, I think, might do the trick, but surely there must be something more elegant?

for i in range(N_STEPS):
    sess.run(train_step, feed_dict={x_: X, y_: Y})
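The loop above feeds the entire dataset on every step. Selecting a random minibatch "at will" needs nothing more than NumPy fancy indexing over the same arrays; a minimal sketch (the array shapes, batch size, and seed here are illustrative assumptions, and the resulting `bx`, `by` would be passed as `feed_dict={x_: bx, y_: by}`):

```python
import numpy as np

# Toy stand-ins for the question's X and Y (shapes are assumptions):
# 10 samples with 2 features each, and one label per sample.
X = np.arange(20, dtype=np.float32).reshape(10, 2)
Y = np.arange(10, dtype=np.float32).reshape(10, 1)

BATCH_SIZE = 4
rng = np.random.default_rng(0)

# Draw a random minibatch without replacement via fancy indexing.
idx = rng.choice(len(X), size=BATCH_SIZE, replace=False)
bx, by = X[idx], Y[idx]
```

Because both arrays are indexed with the same `idx`, each input row stays paired with its label.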

1 answer:

Answer 0 (score: 1)

The Udacity TensorFlow course created by Google uses the following batching approach:

for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)

    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]

    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}

with tf_train_dataset and tf_train_labels defined as tf.placeholder.
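The offset arithmetic in that loop can be exercised without TensorFlow at all; a self-contained NumPy sketch (the dataset sizes, batch size, and step count are assumptions for illustration):

```python
import numpy as np

# Stand-ins for train_dataset / train_labels: 12 samples,
# 2 features each, one label per sample (sizes are assumptions).
train_dataset = np.arange(24, dtype=np.float32).reshape(12, 2)
train_labels = np.arange(12, dtype=np.float32).reshape(12, 1)

batch_size = 5
num_steps = 4

offsets = []
for step in range(num_steps):
    # Same wrap-around offset arithmetic as the course's loop.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    offsets.append(offset)

    # Generate a minibatch by slicing at the computed offset.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
```

Because the modulus is `num_samples - batch_size`, the largest possible offset still leaves room for a full slice, so every minibatch has exactly `batch_size` rows.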