My model compiles fine, but it fails as soon as I start training:
InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,28,28,728] vs. shape[1] = [1,0,0,256]
[[Node: upblock3_concat/concat = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](block4_sepconv2_bn_1/cond/Merge, upblock3_crop/strided_slice, upblock3_concat/concat/axis)]]
[[Node: metrics/iou/Mean/_2825 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_17728_metrics/iou/Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Caused by op 'upblock3_concat/concat', defined at:
File "cranes/train.py", line 134, in <module>
fire.Fire(train)
File "/home/karolzak/.local/share/virtualenvs/hackathon-prague-2018-Ywp4garX/lib/python3.5/site-packages/fire/core.py", line 127, in Fire
component_trace = _Fire(component, args, context, name)
File "/home/karolzak/.local/share/virtualenvs/hackathon-prague-2018-Ywp4garX/lib/python3.5/site-packages/fire/core.py", line 366, in _Fire
component, remaining_args)
File "/home/karolzak/.local/share/virtualenvs/hackathon-prague-2018-Ywp4garX/lib/python3.5/site-packages/fire/core.py", line 542, in _CallCallable
result = fn(*varargs, **kwargs)
File "cranes/train.py", line 48, in train
model = XUnet(xception_weights=None)((target_size, target_size, 3))
File "/home/karolzak/ric/hackathon-prague-2018/cranes/xunet.py", line 157, in __call__
x = layers.Concatenate(name=prefix + 'concat')([bridge, x])
File "/home/karolzak/.local/share/virtualenvs/hackathon-prague-2018-Ywp4garX/lib/python3.5/site-packages/keras/engine/topology.py", line 619, in __call__
output = self.call(inputs, **kwargs)
File "/home/karolzak/.local/share/virtualenvs/hackathon-prague-2018-Ywp4garX/lib/python3.5/site-packages/keras/layers/merge.py", line 155, in call
return self._merge_function(inputs)
File "/home/karolzak/.local/share/virtualenvs/hackathon-prague-2018-Ywp4garX/lib/python3.5/site-packages/keras/layers/merge.py", line 357, in _merge_function
return K.concatenate(inputs, axis=self.axis)
File "/home/karolzak/.local/share/virtualenvs/hackathon-prague-2018-Ywp4garX/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 1881, in concatenate
return tf.concat([to_dense(x) for x in tensors], axis)
File "/home/karolzak/.local/share/virtualenvs/hackathon-prague-2018-Ywp4garX/lib/python3.5/site-packages/tensorflow/python/ops/array_ops.py", line 1099, in concat
return gen_array_ops._concat_v2(values=values, axis=axis, name=name)
File "/home/karolzak/.local/share/virtualenvs/hackathon-prague-2018-Ywp4garX/lib/python3.5/site-packages/tensorflow/python/ops/gen_array_ops.py", line 706, in _concat_v2
"ConcatV2", values=values, axis=axis, name=name)
File "/home/karolzak/.local/share/virtualenvs/hackathon-prague-2018-Ywp4garX/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/home/karolzak/.local/share/virtualenvs/hackathon-prague-2018-Ywp4garX/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
op_def=op_def)
File "/home/karolzak/.local/share/virtualenvs/hackathon-prague-2018-Ywp4garX/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
InvalidArgumentError (see above for traceback): ConcatOp : Dimensions of inputs should match: shape[0] = [1,28,28,728] vs. shape[1] = [1,0,0,256]
[[Node: upblock3_concat/concat = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](block4_sepconv2_bn_1/cond/Merge, upblock3_crop/strided_slice, upblock3_concat/concat/axis)]]
[[Node: metrics/iou/Mean/_2825 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_17728_metrics/iou/Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
My code is fairly complex, but the interesting part is:
bridge = SeparableConv2D(728, (3, 3), padding='same', use_bias=False, name='block4_sepconv2')(bridge)
# lots of things happening here
x = Cropping2D(cropping=(padding, padding), name=prefix + 'crop')(x)
x = layers.Concatenate(name=prefix + 'concat')([bridge, x])
If I open a REPL right before the layers.Concatenate call, the shapes seem to match:
In [1]: x
Out[1]: <tf.Tensor 'upblock3_crop/strided_slice:0' shape=(?, ?, ?, 256) dtype=float32>
In [2]: bridge
Out[2]: <tf.Tensor 'block4_sepconv2_bn_1/cond/Merge:0' shape=(?, 28, 28, 728) dtype=float32>
In [3]: K.int_shape(x)
Out[3]: (None, 28, 28, 256)
In [4]: K.int_shape(bridge)
Out[4]: (None, 28, 28, 728)
What is going on here? Any ideas how to debug this? In case it makes a difference, I am using TensorFlow 1.4.0 and Keras 2.1.5.
Update: model summary at https://gist.github.com/ryszard/ad484ca39c8b650c72693d91b3abcbb8 (it was too long to paste into the question body).
Update 2:
I changed my code to:
x = Cropping2D(cropping=(padding, padding), name=prefix + 'crop')(x)
x_shape = K.int_shape(x)
#x = layers.Lambda(lambda x: K.reshape(x, (-1,) + x_shape[1:]))(x)
print('reshaping', x_shape[1:])
x = layers.Reshape(x_shape[1:])(x)
print('right before', K.int_shape(bridge), K.int_shape(x))
x = layers.Concatenate(name=prefix + 'concat')([bridge, x])
With that version I get:
InvalidArgumentError (see above for traceback): Input to reshape is a tensor with 0 values, but the requested shape has 200704
[[Node: reshape_1/Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](upblock3_crop/strided_slice, reshape_1/Reshape/shape)]]
[[Node: metrics/acc/Mean_1/_2821 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_17769_metrics/acc/Mean_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
If I reshape using the lambda instead (commented out in the snippet above), I get:
InvalidArgumentError (see above for traceback): ConcatOp : Dimensions of inputs should match: shape[0] = [1,28,28,728] vs. shape[1] = [0,28,28,256]
[[Node: upblock3_concat/concat = ConcatV2[N=2, T=DT_FLOAT, Tidx=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](block4_sepconv2_bn_1/cond/Merge, reshape_1/Reshape, upblock3_concat/concat/axis)]]
[[Node: metrics/iou/Mean/_2825 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_17782_metrics/iou/Mean", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
And if I use layers.Lambda(lambda x: K.reshape(x, (1,) + x_shape[1:]))(x), Keras complains:
ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 28, 28, 728), (1, 28, 28, 256)]
Answer 0: (score: 0)
This minimal example works for me. I should add that I am on TensorFlow 1.6, not sure whether that makes a difference. Please test it on your machine.
import tensorflow as tf
from keras import backend as K  # int_shape lives in keras.backend, not the top-level keras module
from keras.layers import SeparableConv2D, Cropping2D, Concatenate

prefix = 'test'

# Plain TF placeholder standing in for the network input
img = tf.placeholder(tf.float32, shape=(None, 32, 32, 784))

# 'valid' padding shrinks the 32x32 input to 30x30
bridge = SeparableConv2D(728, (3, 3), padding='valid', use_bias=False, name='block4_sepconv2')(img)

# Crop 1 pixel on each side, so this branch also ends up 30x30
padding = 1
x = Cropping2D(cropping=(padding, padding), name=prefix + 'crop')(img)
print(x)
print(x.get_shape(), K.int_shape(x))

x = Concatenate(name=prefix + 'concat')([bridge, x])
print(x)
print(x.get_shape(), K.int_shape(x))
Tensor("testcrop/strided_slice:0", shape=(?, 30, 30, 784), dtype=float32)
(TensorShape([Dimension(None), Dimension(30), Dimension(30), Dimension(784)]), (None, 30, 30, 784))
Tensor("testconcat/concat:0", shape=(?, 30, 30, 1512), dtype=float32)
(TensorShape([Dimension(None), Dimension(30), Dimension(30), Dimension(1512)]), (None, 30, 30, 1512))
If this example does not work for you, upgrade TensorFlow. If it does work, we will have to look at how your x ends up with the undetermined H and W dimensions.
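One way to check that is to log the runtime shape right before the concat. Here is a minimal, untested sketch (it assumes the x, bridge and prefix names from your own snippet, and log_shape is a hypothetical helper I made up for debugging) that wraps the tensors in tf.Print through a Lambda layer; unlike K.int_shape, the printed shape is the dynamic one, so it shows what the cropped tensor actually looks like once training data flows through:
import tensorflow as tf
from keras import layers

def log_shape(tag):
    # Hypothetical debug helper: tf.Print is an identity op that prints its
    # extra arguments when the graph is executed, i.e. the runtime shape.
    return layers.Lambda(lambda t: tf.Print(t, [tf.shape(t)], message=tag + ' runtime shape: '))

x = log_shape(prefix + 'crop')(x)        # remove these wrappers once diagnosed
bridge = log_shape('bridge')(bridge)
x = layers.Concatenate(name=prefix + 'concat')([bridge, x])
If that really prints something like [1 0 0 256] for the cropped tensor, it would point at the data being fed in (for example, images smaller than the size the crop expects) rather than at the graph itself.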