How do I get a dataset into an array?

Date: 2019-04-19 09:58:29

Tags: csv numpy tensorflow artificial-intelligence

I have gone through all the tutorials and searched for "load csv tensorflow", but I just can't get the logic of it all. I'm not a complete beginner, but I don't have much time to finish this, and I've been suddenly thrown into Tensorflow, which is unexpectedly difficult.

Let me lay it out:

Very simple CSV file of 184 columns, all floats. A row is simply today's price, three buy signals, and the previous 180 days' prices

close = tf.placeholder(float, name='close')
signals = tf.placeholder(bool, shape=[3], name='signals')
previous = tf.placeholder(float, shape=[180], name='previous')

This article: https://www.tensorflow.org/guide/datasets covers how to load quite well. It even has a section on converting to a numpy array, which is what I need to train and test the 'net. However, as the author says in the article leading to that page, it is quite complicated. It seems everything is geared toward manipulating the data, and we have already normalized our data (nothing has changed in AI since 1983 as far as inputs, outputs, and layers go).

Here is one way to load it, but not into Numpy, and there are no examples that don't also manipulate the data.

 with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    with open('/BTC1.csv') as csv_file:
        csv_reader = csv.reader(csv_file, delimiter=',')
        line_count = 0
        for row in csv_reader:
            ?????????
            line_count += 1

I need to know how to get the csv file into

close = tf.placeholder(float, name='close')
signals = tf.placeholder(bool, shape=[3], name='signals')
previous = tf.placeholder(float, shape=[180], name='previous')

so that I can follow the tutorials to train and test the net.

1 Answer:

Answer 0 (score: 1)

Your question is not clear to me. You are probably asking, correct me if I'm wrong, how to feed data into your model? There are several ways to do so.

  1. Use placeholders with feed_dict during the session. This is the basic and easiest way, but it often runs into training-performance issues. For further explanation, check this post.
  2. Use queues. I don't recommend this: it is hard to implement and poorly documented, since it has been superseded by the third method.
  3. The tf.data API.
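At its core, the batching that method 1 performs is just slicing aligned NumPy arrays row-range by row-range. Here is a TensorFlow-free sketch of that idea; the function and array names are illustrative, not from the question:

```python
import numpy as np

def iter_batches(close_col, signal_cols, previous_cols, batch_size):
    """Yield aligned (close, signals, previous) slices of up to batch_size rows."""
    n_rows = close_col.shape[0]
    for start in range(0, n_rows, batch_size):
        end = start + batch_size
        yield (close_col[start:end],
               signal_cols[start:end],
               previous_cols[start:end])
```

Each yielded triple is exactly what one feed_dict would receive per training step; the last batch is simply shorter when the row count is not a multiple of batch_size.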

...

So your question can be answered with the first method:

# get your array outside the session
with open('/BTC1.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    dataset = np.asarray([data for data in csv_reader], dtype=np.float32)
    close_col = dataset[:, 0]       # today's price
    signal_cols = dataset[:, 1:4]   # the three buy signals
    previous_cols = dataset[:, 4:]  # the previous 180 days of prices

# let's say you load 100 rows each time for training
batch_size = 100

# define placeholders as you did
...

with tf.Session() as sess:
    ...
    for i in range(number_iter):
        start = i * batch_size
        end = (i + 1) * batch_size
        sess.run(train_operation, feed_dict={close: close_col[start:end],
                                             signals: signal_cols[start:end],
                                             previous: previous_cols[start:end]
                                             }
                 )
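As a side note, the csv.reader-plus-np.asarray step above can also be done in one call with np.genfromtxt. A self-contained sketch, using a small synthetic file as a stand-in for the question's /BTC1.csv (5 rows instead of a real price history, same 184-column layout):

```python
import os
import tempfile
import numpy as np

# Build a tiny synthetic CSV standing in for /BTC1.csv:
# 5 rows x 184 columns (1 close + 3 signals + 180 previous prices).
rows = np.random.rand(5, 184).astype(np.float32)
path = os.path.join(tempfile.mkdtemp(), 'BTC1.csv')
np.savetxt(path, rows, delimiter=',')

# Load it back as one 2-D float array, one row per trading day.
data = np.genfromtxt(path, delimiter=',', dtype=np.float32)

close_col = data[:, 0]        # today's price        -> shape (5,)
signal_cols = data[:, 1:4]    # three buy signals    -> shape (5, 3)
previous_cols = data[:, 4:]   # prior 180 day prices -> shape (5, 180)
```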

And with the third method:

# retrieve your columns like before
...

# let's say you load 100 rows each time for training
batch_size = 100

# construct your input pipeline
c_col, s_col, p_col = wrapper(filename)
batch = tf.data.Dataset.from_tensor_slices((c_col, s_col, p_col))
batch = batch.shuffle(c_col.shape[0]).batch(batch_size)  # mix the data --> assemble batches --> ready to feed into the model
iterator = batch.make_initializable_iterator()
iter_init_operation = iterator.initializer
c_it, s_it, p_it = iterator.get_next()  # get-next-batch operation, called automatically at each iteration within the session

# replace the close, signals, previous placeholders in your model with c_it, s_it, p_it when you define the model
...

with tf.Session() as sess:
    # you need to initialize the variables and the iterator
    sess.run([tf.global_variables_initializer(), iter_init_operation])
    ...
    for i in range(number_iter):
        sess.run(train_operation)  # the iterator delivers the next batch by itself
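To make the shuffle-then-batch step concrete without running a session, here is a pure-NumPy sketch of what that part of the pipeline does to the rows. It is a simplification: tf.data shuffles from a bounded buffer, whereas this permutes everything at once (which is what a buffer the size of the whole dataset amounts to); the function name is illustrative:

```python
import numpy as np

def shuffle_and_batch(data, batch_size, seed=0):
    """Mimic Dataset.shuffle(len(data)).batch(batch_size) on a NumPy array."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(data))  # mix the rows
    shuffled = data[order]
    # assemble consecutive batches; the last one may be shorter
    return [shuffled[i:i + batch_size] for i in range(0, len(data), batch_size)]
```

Every row appears exactly once across the batches, just in a randomized order, which is the property the shuffle-and-batch stage is meant to guarantee per epoch.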

Good luck!