tensorflow, image segmentation convnet — InvalidArgumentError: Input to reshape is a tensor with 28800000 values, but the requested shape has 57600

Date: 2018-05-27 13:50:38

Tags: python image-processing tensorflow conv-neural-network image-segmentation

I am trying to segment images from the BRATS challenge. I am using a U-Net built from a combination of these two repositories:

https://github.com/zsdonghao/u-net-brain-tumor

https://github.com/jakeret/tf_unet

When I try to output the prediction statistics, a shape-mismatch error appears:

    InvalidArgumentError: Input to reshape is a tensor with 28800000 values,
    but the requested shape has 57600
    [[Node: Reshape_2 = Reshape[T=DT_FLOAT, Tshape=DT_INT32,
    _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_Cast_0_0, Reshape_2/shape)]]

I am using 240x240 image slices, with batch_verification_size = 500.

These are the shapes I get:

  • this is shape test_x: (500, 240, 240, 1)
  • this is shape test_y: (500, 240, 240, 1)
  • this is shape test x: (500, 240, 240, 1)
  • this is shape test y: (500, 240, 240, 1)
  • this is shape batch x: (500, 240, 240, 1)
  • this is shape batch y: (500, 240, 240, 1)
  • this is shape prediction: (500, 240, 240, 1)
  • this is cost: Tensor("add_88:0", shape=(), dtype=float32)
  • this is cost: Tensor("Mean_2:0", shape=(), dtype=float32)
  • this is shape prediction: (?, ?, ?, 1)
  • this is shape batch x: (500, 240, 240, 1)
  • this is shape batch y: (500, 240, 240, 1)

240 x 240 x 500 = 28800000, so I don't understand why 57600 values are requested.
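The two numbers in the error differ by exactly the batch size, which a quick arithmetic check makes visible (the shapes below are taken from the printouts above):

```python
# Shapes reported in the printouts above.
batch, height, width, channels = 500, 240, 240, 1

# Number of values actually fed to the reshape op.
fed = batch * height * width * channels
print(fed)              # 28800000

# Number of values the graph asks for: a single 240x240x1 slice.
expected = 1 * height * width * channels
print(expected)         # 57600

# The mismatch factor is exactly the batch size.
print(fed // expected)  # 500
```

In other words, the graph's reshape was built for a batch of 1, while 500 slices are being fed at once.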

It looks like the error comes from the output_minibatch_stats function:

    summary_str, loss, acc, predictions = sess.run([self.summary_op,
                                                    self.net.cost, self.net.accuracy,
                                                    self.net.predicter],
                                                   feed_dict={self.net.x: batch_x,
                                                              self.net.y: batch_y,
                                                              self.net.keep_prob: 1.})

So the sess.run call is what fails. Below is the surrounding code where the error appears. Does anyone know what is going on?

    def store_prediction(self, sess, batch_x, batch_y, name):
        print('track 1')
        prediction = sess.run(self.net.predicter, feed_dict={self.net.x: batch_x,
                                                             self.net.y: batch_y,
                                                             self.net.keep_prob: 1.})
        print('track 2')
        pred_shape = prediction.shape

        loss = sess.run(self.net.cost, feed_dict={self.net.x: batch_x,
                                                  self.net.y: batch_y,
                                                  self.net.keep_prob: 1.})
        print('track 3')
        logging.info("Verification error= {:.1f}%, loss= {:.4f}".format(error_rate(prediction,
                                                                                   util.crop_to_shape(batch_y,
                                                                                                      prediction.shape)),
                                                                        loss))
        print('track 4')
        print('this is shape batch x: ' + str(batch_x.shape))
        print('this is shape batch y: ' + str(batch_y.shape))
        print('this is shape prediction: ' + str(prediction.shape))
        #img = util.combine_img_prediction(batch_x, batch_y, prediction)
        print('track 5')
        #util.save_image(img, "%s/%s.jpg"%(self.prediction_path, name))

        return pred_shape

    def output_epoch_stats(self, epoch, total_loss, training_iters, lr):
        logging.info("Epoch {:}, Average loss: {:.4f}, learning rate: {:.4f}".format(epoch, (total_loss / training_iters), lr))

    def output_minibatch_stats(self, sess, summary_writer, step, batch_x, batch_y):
        print('this is shape cost : ' + str(self.net.cost.shape))
        print('this is cost : ' + str(self.net.cost))
        print('this is  acc : ' + str(self.net.accuracy.shape))
        print('this is cost : ' + str(self.net.accuracy))
        print('this is shape prediction: ' + str(self.net.predicter.shape))
        print('this is shape batch x: ' + str(batch_x.shape))
        print('this is shape batch y: ' + str(batch_y.shape))


        # Calculate batch loss and accuracy
        summary_str, loss, acc, predictions = sess.run([self.summary_op, 
                                                            self.net.cost, 
                                                            self.net.accuracy, 
                                                            self.net.predicter], 
                                                           feed_dict={self.net.x: batch_x,
                                                                      self.net.y: batch_y,
                                                                      self.net.keep_prob: 1.})
        print('track 6')
        summary_writer.add_summary(summary_str, step)
        print('track 7')
        summary_writer.flush()
        logging.info("Iter {:}, Minibatch Loss= {:.4f}, Training Accuracy= {:.4f}, Minibatch error= {:.1f}%".format(step,
                                                                                                            loss,
                                                                                                            acc,
                                                                                                            error_rate(predictions, batch_y)))
        print('track 8')

1 Answer:

Answer 0 (score: 1)

During training you set the batch size to 1 in your TensorFlow pipeline, but at test time you feed in a batch of 500. That is why the network only asks for a tensor with 57600 values (one 240 x 240 x 1 slice). You can either set the training batch size to 500, or set the test batch size to 1.
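A minimal sketch of the second option: slice the test set into batches of 1 before feeding it, so each feed matches the batch size the graph was built with. This uses a dummy array in place of the real BRATS data, and the actual sess.run line (with the net/sess objects from tf_unet, as in the question) is left commented out as an assumption:

```python
import numpy as np

# Dummy test set with the shapes reported in the question; the real
# test_x would come from the BRATS slices.
test_x = np.zeros((500, 240, 240, 1), dtype=np.float32)

predictions = []
for i in range(test_x.shape[0]):
    batch = test_x[i:i + 1]     # shape (1, 240, 240, 1), keeps the batch dim
    assert batch.size == 57600  # exactly the shape the graph requests
    # pred = sess.run(self.net.predicter,
    #                 feed_dict={self.net.x: batch, self.net.keep_prob: 1.})
    # predictions.append(pred)
```

Alternatively, if the network's placeholder is defined with a `None` batch dimension (e.g. `tf.placeholder(tf.float32, [None, 240, 240, 1])`) and no reshape hard-codes the batch size, the same graph can accept any batch size at train and test time.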