Cannot fit data to a 3D convolutional U-net in Keras

Date: 2017-07-05 08:38:34

Tags: machine-learning keras convolution unet

I have a problem. I want to build a 3D convolutional U-net, and I am using Keras for this.

My data consists of MRI images from the Data Science Bowl 2017 competition. All MRIs are stored in a numpy array (all pixels scaled from 0 to 1) with shape:

data_ch.shape
(94, 50, 50, 50, 1)

94 patients, each an MRI volume of 50 slices of 50x50 pixels, 1 channel. [image: The patients MRI from dataset]
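A minimal sketch of how such an array could be assembled, assuming the per-patient volumes have already been resampled to 50x50x50 and scaled to [0, 1] (the volumes list below is a placeholder, not the actual loading code):

import numpy as np

# placeholder: 94 per-patient volumes, each shaped (50, 50, 50), values in [0, 1]
volumes = [np.random.rand(50, 50, 50).astype('float32') for _ in range(94)]

data_ch = np.stack(volumes, axis=0)   # (94, 50, 50, 50)
data_ch = data_ch[..., np.newaxis]    # add the channel axis -> (94, 50, 50, 50, 1)
print(data_ch.shape)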

I want to build a 3D convolutional U-net, so the input and output of the network are the same 3D array. The 3D U-net:

from keras.layers import Input, Conv3D, MaxPooling3D, UpSampling3D
from keras.models import Model

input_img= Input(shape=(data_ch.shape[1], data_ch.shape[2], data_ch.shape[3], data_ch.shape[4]))
x=Conv3D(filters=8, kernel_size=(3, 3, 3), activation='relu', padding='same')(input_img)
x=MaxPooling3D(pool_size=(2, 2, 2), padding='same')(x)
x=Conv3D(filters=8, kernel_size=(3, 3, 3), activation='relu', padding='same')(x)
x=MaxPooling3D(pool_size=(2, 2, 2), padding='same')(x)

x=UpSampling3D(size=(2, 2, 2))(x)
x=Conv3D(filters=8, kernel_size=(3, 3, 3), activation='relu', padding='same')(x) # PADDING IS NOT THE SAME!!!!!
x=UpSampling3D(size=(2, 2, 2))(x)
x=Conv3D(filters=1, kernel_size=(3, 3, 3), activation='sigmoid')(x)

model=Model(input_img, x)
model.compile(optimizer='adadelta', loss='binary_crossentropy')

model.summary()
Layer (type)                 Output Shape              Param #   
=================================================================
input_5 (InputLayer)         (None, 50, 50, 50, 1)     0         
_________________________________________________________________
conv3d_27 (Conv3D)           (None, 50, 50, 50, 8)     224       
_________________________________________________________________
max_pooling3d_12 (MaxPooling (None, 25, 25, 25, 8)     0         
_________________________________________________________________
conv3d_28 (Conv3D)           (None, 25, 25, 25, 8)     1736      
_________________________________________________________________
max_pooling3d_13 (MaxPooling (None, 13, 13, 13, 8)     0         
_________________________________________________________________
up_sampling3d_12 (UpSampling (None, 26, 26, 26, 8)     0         
_________________________________________________________________
conv3d_29 (Conv3D)           (None, 26, 26, 26, 8)     1736      
_________________________________________________________________
up_sampling3d_13 (UpSampling (None, 52, 52, 52, 8)     0         
_________________________________________________________________
conv3d_30 (Conv3D)           (None, 50, 50, 50, 1)     217       
=================================================================
Total params: 3,913
Trainable params: 3,913
Non-trainable params: 0
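For reference, the spatial sizes in this summary follow simple arithmetic: pooling with padding='same' uses ceiling division, upsampling doubles the size, and the final 3x3x3 convolution without padding trims 2. A minimal sketch of that arithmetic (my own illustration, not part of the original post):

size = 50
size = (size + 1) // 2   # MaxPooling3D, pool 2, padding='same': ceil(50/2) = 25
size = (size + 1) // 2   # ceil(25/2) = 13 (an odd size rounds up)
size = size * 2          # UpSampling3D: 13 -> 26
size = size * 2          # 26 -> 52
size = size - 2          # final 3x3x3 Conv3D with default 'valid' padding: 52 -> 50
print(size)              # 50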

However, when I try to fit the data to this network:

model.fit(data_ch, data_ch, epochs=1, batch_size=10, shuffle=True, verbose=1)

the program raises an error:

ValueError                                Traceback (most recent call last)
C:\Users\Taranov\Anaconda3\lib\site-packages\theano\compile\function_module.py in __call__(self, *args, **kwargs)
    883             outputs =\
--> 884                 self.fn() if output_subset is None else\
    885                 self.fn(output_subset=output_subset)

ValueError: CudaNdarray_CopyFromCudaNdarray: need same dimensions for dim 1, destination=13, source=14

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input-26-b334d38d9608> in <module>()
----> 1 model.fit(data_ch, data_ch, epochs=1, batch_size=10, shuffle=True, verbose=1)

C:\Users\Taranov\Anaconda3\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, **kwargs)
   1496                               val_f=val_f, val_ins=val_ins, shuffle=shuffle,
   1497                               callback_metrics=callback_metrics,
-> 1498                               initial_epoch=initial_epoch)
   1499 
   1500     def evaluate(self, x, y, batch_size=32, verbose=1, sample_weight=None):

C:\Users\Taranov\Anaconda3\lib\site-packages\keras\engine\training.py in _fit_loop(self, f, ins, out_labels, batch_size, epochs, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch)
   1150                 batch_logs['size'] = len(batch_ids)
   1151                 callbacks.on_batch_begin(batch_index, batch_logs)
-> 1152                 outs = f(ins_batch)
   1153                 if not isinstance(outs, list):
   1154                     outs = [outs]

C:\Users\Taranov\Anaconda3\lib\site-packages\keras\backend\theano_backend.py in __call__(self, inputs)
   1156     def __call__(self, inputs):
   1157         assert isinstance(inputs, (list, tuple))
-> 1158         return self.function(*inputs)
   1159 
   1160 

C:\Users\Taranov\Anaconda3\lib\site-packages\theano\compile\function_module.py in __call__(self, *args, **kwargs)
    896                     node=self.fn.nodes[self.fn.position_of_error],
    897                     thunk=thunk,
--> 898                     storage_map=getattr(self.fn, 'storage_map', None))
    899             else:
    900                 # old-style linkers raise their own exceptions

C:\Users\Taranov\Anaconda3\lib\site-packages\theano\gof\link.py in raise_with_op(node, thunk, exc_info, storage_map)
    323         # extra long error message in that case.
    324         pass
--> 325     reraise(exc_type, exc_value, exc_trace)
    326 
    327 

C:\Users\Taranov\Anaconda3\lib\site-packages\six.py in reraise(tp, value, tb)
    683             value = tp()
    684         if value.__traceback__ is not tb:
--> 685             raise value.with_traceback(tb)
    686         raise value
    687 

C:\Users\Taranov\Anaconda3\lib\site-packages\theano\compile\function_module.py in __call__(self, *args, **kwargs)
    882         try:
    883             outputs =\
--> 884                 self.fn() if output_subset is None else\
    885                 self.fn(output_subset=output_subset)
    886         except Exception:

ValueError: CudaNdarray_CopyFromCudaNdarray: need same dimensions for dim 1, destination=13, source=14
Apply node that caused the error: GpuAlloc(GpuDimShuffle{0,2,x,3,4,1}.0, Shape_i{0}.0, TensorConstant{13}, TensorConstant{2}, TensorConstant{13}, TensorConstant{13}, TensorConstant{8})
Toposort index: 163
Inputs types: [CudaNdarrayType(float32, (False, False, True, False, False, False)), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int8, scalar), TensorType(int64, scalar), TensorType(int64, scalar), TensorType(int64, scalar)]
Inputs shapes: [(10, 14, 1, 14, 14, 8), (), (), (), (), (), ()]
Inputs strides: [(21952, 196, 0, 14, 1, 2744), (), (), (), (), (), ()]
Inputs values: ['not shown', array(10, dtype=int64), array(13, dtype=int64), array(2, dtype=int8), array(13, dtype=int64), array(13, dtype=int64), array(8, dtype=int64)]
Outputs clients: [[GpuReshape{5}(GpuAlloc.0, MakeVector{dtype='int64'}.0)]]

HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.

I tried to follow the hints and set the Theano flags:

import theano
import os
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=gpu,floatX=float32, optimizer='None',exception_verbosity=high"

But it still doesn't work.
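As a side note, THEANO_FLAGS set via os.environ only takes effect if it is assigned before theano is imported for the first time. A minimal sketch of that ordering, using the optimizer=fast_compile value suggested by the hint:

import os
# THEANO_FLAGS is read when theano is first imported, so set it beforehand
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=gpu,floatX=float32,optimizer=fast_compile,exception_verbosity=high"

import theano
import keras  # Keras then picks up the configured Theano backend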

Could you help me? Thanks a lot!

2 Answers:

Answer 0 (score: 0)

OK... this sounds weird, but MaxPooling3D seems to have some kind of bug with padding='same'. So I wrote the code without it and added an initial padding to make your dimensions compatible:

import keras.backend as K
from keras.layers import Input, Lambda, Conv3D, MaxPooling3D, UpSampling3D
from keras.models import Model

inputShape = (data_ch.shape[1], data_ch.shape[2], data_ch.shape[3], data_ch.shape[4])
paddedShape = (data_ch.shape[1]+2, data_ch.shape[2]+2, data_ch.shape[3]+2, data_ch.shape[4])

#initial padding
input_img= Input(shape=inputShape)
x = Lambda(lambda x: K.spatial_3d_padding(x, padding=((1, 1), (1, 1), (1, 1))),
    output_shape=paddedShape)(input_img) #Lambda layers require output_shape

#your original code without padding for MaxPooling layers (replace input_img with x)
x=Conv3D(filters=8, kernel_size=3, activation='relu', padding='same')(x)
x=MaxPooling3D(pool_size=2)(x)
x=Conv3D(filters=8, kernel_size=3, activation='relu', padding='same')(x)
x=MaxPooling3D(pool_size=2)(x)

x=UpSampling3D(size=2)(x)
x=Conv3D(filters=8, kernel_size=3, activation='relu', padding='same')(x) # PADDING IS NOT THE SAME!!!!!
x=UpSampling3D(size=2)(x)
x=Conv3D(filters=1, kernel_size=3, activation='sigmoid')(x)

model=Model(input_img, x)
model.compile(optimizer='adadelta', loss='binary_crossentropy')
model.summary()
print(model.predict(data_ch)[1])
model.fit(data_ch,data_ch,epochs=1,verbose=2,batch_size=10)
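With the initial padding the spatial size becomes 52, which halves cleanly to 26 and then 13 through the two pooling layers, doubles back to 26 and 52 through the upsampling layers, and is trimmed back to 50 by the final unpadded convolution, matching the 50x50x50 target, so padding='same' is no longer needed on the pooling layers.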

Answer 1 (score: 0)

Try reducing the batch size to 2, and if you find that your network needs more GPU memory, consider upgrading the GPU as well.
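For example, simply lowering batch_size in the original fit call:

model.fit(data_ch, data_ch, epochs=1, batch_size=2, shuffle=True, verbose=1)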