Cannot feed value of shape: misunderstanding tensor dimensions

Date: 2019-09-02 20:21:30

Tags: tensorflow tensor

I have been going through some TensorFlow tutorials and am putting together a pet experiment. However, I am running into some dimension errors and can't seem to figure them out.

My goal: I have an input matrix of shape 1xN. I have a training set of shape 10xN (1 and 10 were chosen arbitrarily). N is meant to represent N samples in the training set: one input value mapped to one output vector. You can think of it as one input neuron and m output neurons (here m = 10). The training set is a collection of these single values, each mapped to a 1-D vector. I want to train the network by running this collection of mapped inputs and outputs through it and reducing the error.
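
Concretely (made-up numbers, purely to illustrate the mapping): each scalar stimulus value should correspond to a full 10-element output vector.

import numpy as np

stimulus_value = 42.0                 # one input value
target_vector = np.random.rand(10)    # the 10-element output vector it maps to
# the training set is N such (input value, output vector) pairs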

The simple algorithm I am trying to implement:

  1. For each value in the input vector
    1. Load the input neuron with that value
    2. Feed forward
    3. Evaluate against the corresponding output vector
Repeat to minimize the error.
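
In plain NumPy, this amounts to roughly the following (an illustrative sketch only, with a single weight matrix W standing in for the real network, and assuming one row per sample):

import numpy as np

# Illustrative data only: one row per sample.
inputs = np.random.rand(10, 1)              # 10 samples, 1 input value each
targets = np.random.rand(10, 10)            # 10 samples, one 10-element vector each

W = np.zeros((1, 10))                       # 1 input neuron -> 10 output neurons
learning_rate = 0.01

for x, y in zip(inputs, targets):           # for each value in the input vector
    x = x.reshape(1, 1)                     # 1. load the input neuron with that value
    prediction = x @ W                      # 2. feed forward
    error = prediction - y.reshape(1, 10)   # 3. evaluate against the corresponding vector
    W -= learning_rate * x.T @ error        # adjust the weights to reduce the error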

However, I seem to be confused about how to format my data to feed into the network. I have one placeholder for the 1 input neuron and one for the n output neurons. I want to follow the algorithm above, but I am not sure my implementation is correct:

# Data parameters

num_frames = 10

stimuli_value_low = .00001
stimuli_value_high = 100

pixel_value_low = .00001
pixel_value_high = 256.0

stimuli_dimension = 1
frame_dimension = 10

stimuli = np.random.uniform(stimuli_value_low, stimuli_value_high, (stimuli_dimension, num_frames))
frames = np.random.uniform(pixel_value_low, pixel_value_high, (frame_dimension, num_frames))

# Parameters
learning_rate = 0.01
training_iterations = 1000
display_iteration = 10

# Network Parameters
n_hidden_1 = 100
n_hidden_2 = 100
num_input_neurons = stimuli_dimension
num_output_neurons = frame_dimension

# Create placeholders
input_placeholder = tf.placeholder("float", [None, num_input_neurons])
output_placeholder = tf.placeholder("float", [None, num_output_neurons])

# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([num_input_neurons, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, num_output_neurons]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([num_output_neurons]))
}

# Create model
def neural_net(input_placeholder):
    # Hidden fully connected layer 
    layer_1 = tf.add(tf.matmul(input_placeholder, weights['h1']), biases['b1'])
    # Hidden fully connected layer
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    # Output fully connected layer with a neuron for each pixel
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer

# Construct model
logits = neural_net(input_placeholder)

# Define loss operation and optimizer
loss_operation = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = output_placeholder))
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate)
train_operation = optimizer.minimize(loss_operation)

# Evaluate model (with test logits, for dropout to be disabled)
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(output_placeholder, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()

# Start Training
with tf.Session() as sess:

  # Run the initializer
  sess.run(init)

  for step in range(1, training_iterations + 1): 

    sess.run(train_operation, feed_dict = {X: stimuli, Y: frames})

    if iteration % display_iteration == 0 or iteration == 1:

      loss, accuracy = sess.run([loss_operation, accuracy_operation], feed_dict = {X: stimuli, Y: frames})

      print("Step " + str(iteration) + 
            ", Loss = " + "{:.4f}".format(loss) + 
            ", Training Accuracy= " + \
            "{:.3f}".format(acc))

  print("Optimization finished!")

I think it has something to do with how I am structuring my data, or how I am feeding it to the run function.

Here is the error I am getting:

ValueError                                Traceback (most recent call last)

<ipython-input-420-7517598734d6> in <module>()
      6   for step in range(1, training_iterations + 1):
      7 
----> 8     sess.run(train_operation, feed_dict = {X: stimuli, Y: frames})
      9 
     10     if iteration % display_iteration == 0 or iteration == 1:

1 frames

/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
   1147                              'which has shape %r' %
   1148                              (np_val.shape, subfeed_t.name,
-> 1149                               str(subfeed_t.get_shape())))
   1150           if not self.graph.is_feedable(subfeed_t):
   1151             raise ValueError('Tensor %s may not be fed.' % subfeed_t)

ValueError: Cannot feed value of shape (1, 10) for Tensor 'Placeholder_6:0', which has shape '(?, 1)'
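
For reference, the first shape in the message, (1, 10), is the shape of the NumPy array being fed, and the second, (?, 1), is the shape the placeholder was declared with. A quick way to compare them (a hypothetical diagnostic, not part of the graph code above):

print(stimuli.shape)                  # (1, 10) -- the array being fed
print(input_placeholder.get_shape())  # (?, 1)  -- what the placeholder expects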

How can I make sure I am formatting my input data correctly and shaping my network accordingly?

1 Answer:

Answer 0 (score: 1):

It turns out I had the dimensions of my generated arrays backwards:

stimuli = np.random.uniform(stimuli_value_low, stimuli_value_high, (stimuli_dimension, num_frames))
frames = np.random.uniform(pixel_value_low, pixel_value_high, (frame_dimension, num_frames))

They should be:

stimuli = np.random.uniform(stimuli_value_low, stimuli_value_high, (num_frames, stimuli_dimension))
frames = np.random.uniform(pixel_value_low, pixel_value_high, (num_frames, frame_dimension))
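
With the dimensions swapped, each row is one sample, so the fed arrays now line up with the placeholders (a quick sanity check, assuming the rest of the code from the question is unchanged):

print(stimuli.shape)   # (10, 1)  -- matches input_placeholder's shape (?, 1)
print(frames.shape)    # (10, 10) -- matches output_placeholder's shape (?, 10)

The general convention in TensorFlow is (num_samples, num_features): the leading None dimension of the placeholder is the batch size, and each row of the fed array is one sample.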