ValueError while training in TensorFlow: setting an array element with a sequence

Asked: 2016-02-17 04:50:20

Tags: python tensorflow

I'm running into a ValueError during the training process. Here are the details:

I defined the flags as follows.

import tensorflow as tf

flags = tf.app.flags
FLAGS = flags.FLAGS

flags.DEFINE_string('train', 'train.txt', 'File name of train data')
flags.DEFINE_string('test', 'test.txt', 'File name of test data')
flags.DEFINE_string('train_dir', '/tmp/data', 'Directory to put the training data.')
flags.DEFINE_integer('max_steps', 200, 'Number of steps to run trainer.')
flags.DEFINE_integer('batch_size', 10, 'Batch size. '
                     'Must divide evenly into the dataset sizes.')
flags.DEFINE_float('learning_rate', 1e-4, 'Initial learning rate.')

The training process is as follows:

if 1 == 1:
    # Tensor for images
    images_placeholder = tf.placeholder("float", shape=(None, IMAGE_PIXELS))
    # Tensor for labels
    labels_placeholder = tf.placeholder("float", shape=(None, NUM_CLASSES))
    # dropout rate
    keep_prob = tf.placeholder("float")

    # call inference()
    logits = inference(images_placeholder, keep_prob)
    # call loss()
    loss_value = loss(logits, labels_placeholder)
    # call training()
    train_op = training(loss_value, FLAGS.learning_rate)
    # calculate accuracy
    acc = accuracy(logits, labels_placeholder)

    # prepare for saving
    saver = tf.train.Saver()
    # make Session
    sess = tf.Session()
    # initialize variables
    sess.run(tf.initialize_all_variables())
    # values on TensorBoard
    summary_op = tf.merge_all_summaries()
    summary_writer = tf.train.SummaryWriter(FLAGS.train_dir, sess.graph_def)

    # Training process
    for step in range(FLAGS.max_steps):
        for i in range(len(train_image) / FLAGS.batch_size):
            # offset of this batch
            batch = FLAGS.batch_size * i
            # feed the placeholders via feed_dict
            sess.run(train_op, feed_dict={
                images_placeholder: train_image[batch:batch + FLAGS.batch_size],
                labels_placeholder: train_label[batch:batch + FLAGS.batch_size],
                keep_prob: 0.5})

When I run this code, I get the following error. How can I fix it?

File "CNN_model.py", line 230, in <module>
    images_placeholder: train_image[batch:batch+FLAGS.batch_size],labels_placeholder: train_label[batch:batch+FLAGS.batch_size],keep_prob: 0.5})
File "/Library/Python/2.7/site-packages/tensorflow/python/client/session.py", line 334, in run
    np_val = np.array(subfeed_val, dtype=subfeed_t.dtype.as_numpy_dtype)
ValueError: setting an array element with a sequence.

For reference, the code around train_image and train_label is shown below.

import json
import random

import numpy as np

NUM_CLASSES = 5
IMAGE_SIZE = 599
IMAGE_PIXELS = IMAGE_SIZE * 1 * 128

f = open("song_features.json")
data = json.load(f)
data = np.array(data)

flatten_data = []
flatten_label = []

for line in range(len(data)):
    for_flat = np.array(data[line])
    flatten_data.append(for_flat.flatten().tolist())

    # label made as 1-of-K
    tmp = np.zeros(NUM_CLASSES)
    tmp[int(random.randint(0, 4))] = 1
    flatten_label.append(tmp)

# 1 line training data
train_image = np.asarray(flatten_data)
train_label = np.asarray(flatten_label)

This is how I built the model:

[image of the model]

1 Answer:

Answer (score: 3):

This exception is raised when TensorFlow converts the values in the feed_dict into dense NumPy ndarrays, so it depends on the contents of the train_image and train_label objects.

The most common cause of this error is that a feed value is a ragged list: i.e. a list of lists, where the sublists have different sizes. For example:

>>> train_image = [[1., 2., 3.], [4., 5.]]
>>> np.array(train_image, dtype=np.float32)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: setting an array element with a sequence.

EDIT: Thanks for sharing the code that builds train_image and train_label. I suspect the problem is in the creation of flatten_data, if the elements of data have different lengths for each example. Try the following modification to confirm: train_image = np.asarray(flatten_data, dtype=np.float32). If you get the same ValueError, you will need to pad or crop the individual rows so that they all have IMAGE_PIXELS elements.
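
A minimal sketch of that padding/cropping step, reusing the flatten_data list and the IMAGE_PIXELS constant from the question (zero-padding is only one reasonable choice here, not something the answer prescribes):

import numpy as np

IMAGE_PIXELS = 599 * 1 * 128  # same constant as in the question

# More than one distinct row length is exactly what triggers
# "setting an array element with a sequence".
row_lengths = set(len(row) for row in flatten_data)
print("distinct row lengths:", row_lengths)

# Crop rows that are too long and zero-pad rows that are too short,
# so every row ends up with exactly IMAGE_PIXELS elements.
fixed_rows = []
for row in flatten_data:
    row = list(row)[:IMAGE_PIXELS]
    row = row + [0.0] * (IMAGE_PIXELS - len(row))
    fixed_rows.append(row)

train_image = np.asarray(fixed_rows, dtype=np.float32)
print(train_image.shape)  # should be (num_examples, IMAGE_PIXELS)

Once every row has the same length, the feed value converts to a dense ndarray of shape (batch_size, IMAGE_PIXELS) and the original ValueError should no longer be raised.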