MRI segmentation error related to the input shape ("Input 0 of layer conv2d is incompatible with the layer")

Date: 2020-08-25 13:29:44

Tags: python deep-learning reshape image-segmentation mri

I am trying to perform MRI segmentation with a deep learning model, but I get an error related to the image dimensions and I am not sure why.

import numpy as np
import nibabel as nib
import matplotlib.pyplot as plt
img = nib.load('/content/drive/My Drive/Programa2/P1_FL_final.nii.gz')
%matplotlib inline

img_np = img.get_fdata()
print(type(img_np), img_np.shape)

# Plot one slice of the image
img_slice = img.get_fdata()[:,:,20]
plt.imshow(img_slice, cmap='gray')

# Make prediction
img_analised = img_np
# img_analised = img_np[:,:,:]  # I was trying to change the dimensions this way
print(img_analised.shape)  # Image shape: (480, 512, 30)
newmodel.predict(img_analised)

Error message:

ValueError: Input 0 of layer conv2d is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: [32, 512, 30]

1 Answer:

Answer 0: (score: 0)

The problem was the shape of the input image: the code expected the MRI in four different forms (a 4-dimensional input), while I was providing fewer. Once I changed that, it worked.
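
A minimal sketch of one way to build that 4-dimensional input, assuming newmodel is a Keras Conv2D model trained on single-channel 480x512 slices (that assumption is not confirmed in the question): move the slice axis into the batch position and add a trailing channel axis.

import numpy as np
import nibabel as nib

# Load the volume as in the question; its shape is (480, 512, 30) = (height, width, slices).
img_np = nib.load('/content/drive/My Drive/Programa2/P1_FL_final.nii.gz').get_fdata()

# Move the slice axis to the front so each slice becomes one sample in the batch,
# then add a trailing channel axis: the result has shape (30, 480, 512, 1).
batch = np.moveaxis(img_np, -1, 0)[..., np.newaxis]
print(batch.shape)  # (30, 480, 512, 1)

# Only valid if newmodel really expects single-channel 480x512 inputs (an assumption here);
# check newmodel.input_shape to confirm before predicting.
# predictions = newmodel.predict(batch)

If the model instead expects several MRI modalities stacked as channels, which is another reading of "four different forms" in the answer, the fix would be to stack those volumes along the last axis rather than adding a single channel of ones.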