How to read files with differing names using the TensorFlow Dataset API without evaluating the filename string

Date: 2018-02-21 01:15:45

Tags: python tensorflow file-io

Suppose I receive CSV dataset files named index_channel.csv, where index is the index of the example (running from 1 to 10000) and channel is the index of the channel (running from 1 to 5). So 7_3.csv is channel 3 of example 7. I want to load all of these CSV files and concatenate the channels to get the proper tensors for my dataset. I'm missing a reference to the function(s) that would let me do this. Below is the code I have so far. When I run it, it complains with TypeError: expected str, bytes or os.PathLike object, not Tensor. I'm guessing it is trying to evaluate the expression instead of waiting until sess.run() is called, but I'm not sure how to work around that.

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

# Imports
import numpy as np
import tensorflow as tf
from tensorflow.contrib.data import Dataset, Iterator

def main(unused_argv):
  train_imgs = tf.constant(["1","2","3"]) #just trying the 3 first examples
  tr_data = Dataset.from_tensor_slices((train_imgs))
  tr_data = tr_data.map(input_parser)

  # create TensorFlow Iterator object
  iterator = Iterator.from_structure(tr_data.output_types,
                                   tr_data.output_shapes)
  next_element = iterator.get_next()
  training_init_op = iterator.make_initializer(tr_data)
  with tf.Session() as sess:

    # initialize the iterator on the training data
    sess.run(training_init_op)
    # get each element of the training dataset until the end is reached
    while True:
        try:
            elem = sess.run(next_element)
            print(elem)
        except tf.errors.OutOfRangeError:
            print("End of training dataset.")
            break

def input_parser(index):
  dic={}
  for d in range(1,6):
    a=np.loadtxt(open("./data_for_tf/" + index +"_M"+str(d)+".csv", "rb"), delimiter=",", skiprows=1)
    dic[d]=tf.convert_to_tensor(a, dtype=tf.float32)
  metric=np.stack((dic[1],dic[2],dic[3])) 
  return metric

Sorry, I'm new to TF. My question probably seems trivial, but none of the examples I found through googling answer it.

1 Answer:

Answer 0 (score: 2)

It looks to me like the error is generated by your use of index in:

a=np.loadtxt(open("./data_for_tf/" + index +"_M"+str(d)+".csv", "rb"), delimiter=",", skiprows=1)

As you suspect, your input_parser is called only once, when TensorFlow sets up its declarative model; this establishes the relationships between the TensorFlow operations for later evaluation. Your Python calls (such as the numpy operations), however, run immediately during this initialization. At that point np.loadtxt is trying to build a string from a TF operation whose value is not yet available.

If that is indeed the case, you don't even need to run the model to generate the error (try removing sess.run()).
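A minimal sketch of that point (the path below simply mirrors the question's hypothetical layout): the TypeError is raised while the graph is being constructed, with no session run at all.

import numpy as np
import tensorflow as tf

index = tf.constant("7")  # a string Tensor, not a Python str

try:
    # open()/np.loadtxt need a real Python string right now, but
    # "./data_for_tf/" + index is a TF string-concat op producing a Tensor,
    # so open() fails immediately -- before any sess.run().
    np.loadtxt(open("./data_for_tf/" + index + "_M1.csv", "rb"),
               delimiter=",", skiprows=1)
except TypeError as e:
    print(e)  # expected str, bytes or os.PathLike object, not Tensor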

You'll notice that in the example at https://www.tensorflow.org/programmers_guide/datasets#preprocessing_data_with_datasetmap the data is read using TF file-access functions:

filenames = ["/var/data/file1.txt", "/var/data/file2.txt"]

dataset = tf.data.Dataset.from_tensor_slices(filenames)

# Use `Dataset.flat_map()` to transform each file as a separate nested dataset,
# and then concatenate their contents sequentially into a single "flat" dataset.
# * Skip the first line (header row).
# * Filter out lines beginning with "#" (comments).

dataset = dataset.flat_map(
    lambda filename: (
        tf.data.TextLineDataset(filename)
        .skip(1)
        .filter(lambda line: tf.not_equal(tf.substr(line, 0, 1), "#"))))

These are designed to be part of the declarative TF model (i.e. the filenames are resolved at run time).

Here are further examples of reading files with TensorFlow operations:

https://www.tensorflow.org/get_started/datasets_quickstart#reading_a_csv_file

It is also possible to use imperative Python functions (see "Applying arbitrary Python logic with tf.py_func()" in the first link), although this is recommended only when there is no other option.

So, basically, unless you use the tf.py_func() mechanism, you cannot expect any ordinary Python operation that depends on a TF tensor or operation to work as expected. They can, however, be used in looping constructs to build up inter-related TF operations.
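For completeness, here is a rough sketch (not part of the original answer, and reusing the question's hypothetical ./data_for_tf/ paths and a 3-channel stack) of how the numpy-based parser could be wrapped with tf.py_func so that the file loading runs at session time, once the index string is actually available:

import numpy as np
import tensorflow as tf

def load_example(index):
    # Called at run time by tf.py_func; `index` arrives as a Python
    # bytes object here, so ordinary string operations work.
    index = index.decode("utf-8")
    channels = []
    for d in range(1, 4):  # assuming 3 channels, as in the question's np.stack
        a = np.loadtxt("./data_for_tf/" + index + "_M" + str(d) + ".csv",
                       delimiter=",", skiprows=1)
        channels.append(a.astype(np.float32))
    return np.stack(channels)

def input_parser(index):
    metric = tf.py_func(load_example, [index], tf.float32)
    # py_func loses static shape information; set it explicitly if known,
    # e.g. metric.set_shape([3, row_count, col_count])
    return metric

train_imgs = tf.constant(["1", "2", "3"])
tr_data = tf.data.Dataset.from_tensor_slices(train_imgs).map(input_parser)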

UPDATE:

Here is a schematic example:

## For a simple example, I have four files <index>_<channel>_File.txt,
## i.e. 1_1_File.txt, 1_2_File.txt, 2_1_File.txt and 2_2_File.txt

import tensorflow as tf

def input_parser(filename):
   filesWithChannels = []

   for i in range(1,3):
       channel_data =  tf.read_file(filename+'_'+str(i)+'_File.txt')

       ## Uncomment the two lines below to add csv parsing.
       # channel_data = tf.sparse_tensor_to_dense(tf.string_split([channel_data],'\n'), default_value='')
       # channel_data = tf.decode_csv(channel_data, record_defaults=[[1.],[1.]])

       filesWithChannels.append(channel_data)

   return tf.convert_to_tensor(filesWithChannels)


train_imgs = tf.constant(["1","2"]) # e.g.
tr_data = tf.data.Dataset.from_tensor_slices(train_imgs)
tr_data = tr_data.map(input_parser)

iterator = tr_data.make_one_shot_iterator()
next_element = iterator.get_next()

with tf.Session() as sess:
    for i in range(2) :
        out = sess.run(next_element)
        print(out) 

UPDATE UPDATE (adding csv):

## For a simple example, I have four files <index>_<channel>_File.txt,
## i.e. 1_1_File.txt, 1_2_File.txt, 2_1_File.txt and 2_2_File.txt

import tensorflow as tf

with tf.device('/cpu:0'):
    def input_parser(filename):
       filesWithChannels = []

       for i in range(1,3):
             channel_data = (tf.data.TextLineDataset(filename+'_'+str(i)+'_File.txt')
                               .map(lambda line: tf.decode_csv(line, record_defaults=[[1.],[1.]])))

             filesWithChannels.append(channel_data)

       return tf.data.Dataset.zip(tuple(filesWithChannels))

train_imgs = tf.constant(["1","2"]) # e.g.
tr_data = tf.data.Dataset.from_tensor_slices(train_imgs)
tr_data = tr_data.flat_map(input_parser)

iterator = tr_data.make_one_shot_iterator()
next_element = iterator.get_next()
next_tensor_element = tf.convert_to_tensor(next_element)

with tf.Session() as sess:
    for i in range(2) :
        out = sess.run(next_tensor_element)
        print(out) 

For details on how to set the field delimiter and how to specify the column count and default values with record_defaults, take a look at tf.decode_csv.
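As a small illustration (with made-up values), record_defaults fixes the number of columns together with their types and fill-in values, and field_delim changes the separator:

import tensorflow as tf

line = tf.constant("1.0;2.5;hello")
col1, col2, col3 = tf.decode_csv(
    line,
    record_defaults=[[0.0], [0.0], ["missing"]],  # 3 columns: float, float, string
    field_delim=";")

with tf.Session() as sess:
    print(sess.run([col1, col2, col3]))  # [1.0, 2.5, b'hello']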