TensorFlow error: TensorShape() must have the same rank

Posted: 2016-03-02 19:05:56

Tags: python theano tensorflow


I am running an activation function which computes z:

def compileActivation(self, net, layerNum):
    variable = net.x if layerNum == 0 else net.varArrayA[layerNum - 1]
    # print tf.expand_dims(net.dropOutVectors[layerNum], 1)
    # print net.varWeights[layerNum]['w'].get_shape().as_list()
    z = (tf.matmul(net.varWeights[layerNum]['w'],
                   variable * (tf.expand_dims(net.dropOutVectors[layerNum], 1) if self.dropout else 1.0))
         + tf.expand_dims(net.varWeights[layerNum]['b'], 1))
    a = self.activation(z, self.pool_size)
    net.varArrayA.append(a)

and passes it to a sigmoid activation. When I try to execute the above function, I get the following error:

ValueError: Shapes TensorShape([Dimension(-2)]) and TensorShape([Dimension(None), Dimension(None)]) must have the same rank

The Theano equivalent for computing z works just fine.
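
For reference, a minimal self-contained sketch (TF 1.x-style API; the layer sizes, placeholder names, and values are illustrative, not from the actual network) of the shape pattern that z relies on — expand_dims turns the rank-1 dropout and bias vectors into rank-2 column vectors so they can combine with the rank-2 operands:

import numpy as np
import tensorflow as tf

n_in, n_out, batch = 4, 3, 5                          # illustrative sizes
w = tf.placeholder(tf.float32, shape=(n_out, n_in))   # weights, rank 2
x = tf.placeholder(tf.float32, shape=(n_in, batch))   # activations, rank 2
drop = tf.placeholder(tf.float32, shape=(n_in,))      # dropout vector, rank 1
b = tf.placeholder(tf.float32, shape=(n_out,))        # bias, rank 1

# expand_dims makes the rank-1 vectors rank-2 column vectors so the elementwise
# product and the addition broadcast; if an operand is built with an unexpected
# rank, shape inference can raise a "must have the same rank" error instead.
z = tf.matmul(w, x * tf.expand_dims(drop, 1)) + tf.expand_dims(b, 1)

with tf.Session() as sess:
    out = sess.run(z, feed_dict={
        w: np.ones((n_out, n_in), np.float32),
        x: np.ones((n_in, batch), np.float32),
        drop: np.ones(n_in, np.float32),
        b: np.zeros(n_out, np.float32),
    })
    print(out.shape)  # (3, 5)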

1 Answer:

Answer 0 (score: 0)

Mihail,

When I ran into this problem, the cause was that my placeholders were the wrong size in my feed dictionary. You should also know how the graph is run in a session: tf.Session.run(fetches, feed_dict=None)

Here is the code where I create the placeholders:
# Note this place holder is for the input data feed-dict definition
input_placeholder = tf.placeholder(tf.float32, shape=(batch_size, FLAGS.InputLayer))
# Not sure yet what this will be used for. 
desired_output_placeholder = tf.placeholder(tf.float32, shape=(batch_size, FLAGS.OutputLayer))
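
A quick sanity check worth adding here (illustrative, not part of the original snippet): compare the placeholder's declared shape against the numpy batch you are about to feed, since a mismatch there is exactly what produces these shape/rank errors.

# Hypothetical check, assuming `ti_feed` is a batch from next_batch() below.
print(input_placeholder.get_shape().as_list())  # e.g. [batch_size, FLAGS.InputLayer]
print(ti_feed.shape)                            # shape of the array being fed
assert list(ti_feed.shape) == input_placeholder.get_shape().as_list()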

Here is my function for filling the feed dictionary:

def fill_feed_dict(data_sets_train, input_pl, output_pl):
  ti_feed, dto_feed = data_sets_train.next_batch(FLAGS.batch_size)

  feed_dict = {
    input_pl: ti_feed,
    output_pl: dto_feed
  }
  return feed_dict

Later I do this:

# Fill a feed dictionary with the actual set of images and labels
# for this particular training step.
feed_dict = fill_feed_dict(data_sets.train, input_placeholder, desired_output_placeholder)

Then, to run the session and fetch the outputs, I have this line:

_, l = sess.run([train_op, loss], feed_dict=feed_dict)
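
For completeness, a minimal self-contained sketch (TF 1.x-style API; the sizes, the dummy loss, and the random data are illustrative, not the original model) showing how the placeholders, the feed dictionary and sess.run fit together:

import numpy as np
import tensorflow as tf

batch_size, n_in, n_out = 8, 10, 3   # illustrative sizes

input_placeholder = tf.placeholder(tf.float32, shape=(batch_size, n_in))
desired_output_placeholder = tf.placeholder(tf.float32, shape=(batch_size, n_out))

# A dummy linear model and squared-error loss, just to have something to train.
w = tf.Variable(tf.zeros([n_in, n_out]))
logits = tf.matmul(input_placeholder, w)
loss = tf.reduce_mean(tf.square(logits - desired_output_placeholder))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# The feed dictionary maps each placeholder to an array of exactly the
# shape the placeholder was declared with.
feed_dict = {
    input_placeholder: np.random.rand(batch_size, n_in).astype(np.float32),
    desired_output_placeholder: np.random.rand(batch_size, n_out).astype(np.float32),
}

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    _, l = sess.run([train_op, loss], feed_dict=feed_dict)
    print(l)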