Negative dimension size caused by subtracting 3 from 1 for 'Conv2D'

Date: 2017-01-14 15:27:54

Tags: python tensorflow keras

I am using Keras with TensorFlow as the backend; here is my code:

import numpy as np
np.random.seed(1373) 
import tensorflow as tf
tf.python.control_flow_ops = tf

import os
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.utils import np_utils

batch_size = 128
nb_classes = 10
nb_epoch = 12


img_rows, img_cols = 28, 28

nb_filters = 32

nb_pool = 2

nb_conv = 3


(X_train, y_train), (X_test, y_test) = mnist.load_data()

print(X_train.shape[0])

X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)


X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255


print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')


Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

model = Sequential()

model.add(Convolution2D(nb_filters, nb_conv, nb_conv,
                        border_mode='valid',
                        input_shape=(1, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, nb_conv, nb_conv))
model.add(Activation('relu'))

model.add(MaxPooling2D(pool_size=(nb_pool, nb_pool)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes)) 
model.add(Activation('softmax')) 

model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=["accuracy"])


model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
          verbose=1, validation_data=(X_test, Y_test))

score = model.evaluate(X_test, Y_test, verbose=0)

print('Test score:', score[0])
print('Test accuracy:', score[1])

And here is the error it raises:

Using TensorFlow backend.
60000
('X_train shape:', (60000, 1, 28, 28))
(60000, 'train samples')
(10000, 'test samples')
Traceback (most recent call last):
  File "mnist.py", line 154, in <module>
    input_shape=(1, img_rows, img_cols)))
  File "/usr/local/lib/python2.7/dist-packages/keras/models.py", line 276, in add
    layer.create_input_layer(batch_input_shape, input_dtype)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 370, in create_input_layer
    self(x)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 514, in __call__
    self.add_inbound_node(inbound_layers, node_indices, tensor_indices)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 572, in add_inbound_node
    Node.create_node(self, inbound_layers, node_indices, tensor_indices)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/topology.py", line 149, in create_node
    output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))
  File "/usr/local/lib/python2.7/dist-packages/keras/layers/convolutional.py", line 466, in call
    filter_shape=self.W_shape)
  File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 1579, in conv2d
    x = tf.nn.conv2d(x, kernel, strides, padding=padding)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_nn_ops.py", line 396, in conv2d
    data_format=data_format, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 759, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2242, in create_op
    set_shapes_for_outputs(ret)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1617, in set_shapes_for_outputs
    shapes = shape_func(op)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1568, in call_with_requiring
    return call_cpp_shape_fn(op, require_shape_fn=True)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 610, in call_cpp_shape_fn
    debug_python_shape_fn, require_shape_fn)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/common_shapes.py", line 675, in _call_cpp_shape_fn_impl
    raise ValueError(err.message)
ValueError: Negative dimension size caused by subtracting 3 from 1 for 'Conv2D' (op: 'Conv2D') with input shapes: [?,1,28,28], [3,3,28,32].

At first I saw answers saying the problem was related to the TensorFlow version, so I upgraded TensorFlow to 0.12.0, but the error is still there. Is it a problem with the network, or am I missing something? What should input_shape look like?

Update: here is my keras.json:

./keras/keras.json

8 Answers:

Answer 0 (score: 70)

Your problem comes from the image_dim_ordering setting in your keras.json.

From the Keras Image Processing docs:

  

dim_ordering: one of {"th", "tf"}. "tf" mode means that the images should have shape (samples, height, width, channels); "th" mode means that the images should have shape (samples, channels, height, width). It defaults to the image_dim_ordering value found in your Keras config file at ~/.keras/keras.json. If you never set it, then it will be "tf".

Keras maps the convolution operation onto the chosen backend (Theano or TensorFlow). However, the two backends made different choices about dimension ordering: if your batch is N images of size HxW with C channels, Theano uses the NCHW ordering while TensorFlow uses the NHWC ordering.

Keras lets you choose whichever ordering you prefer and converts it to match the backend underneath. But if you choose image_dim_ordering="th", it expects Theano-style ordering (NCHW, which is what your code provides), and if image_dim_ordering="tf", it expects TensorFlow-style ordering (NHWC).

Since your image_dim_ordering is set to "tf", your (1, 28, 28) arrays are read as 1x28 images with 28 channels, and a 'valid' 3x3 convolution over a height of 1 gives 1 - 3 + 1 = -1, hence the negative dimension in the traceback. If you reshape your data into the TensorFlow style, it should work:

X_train = X_train.reshape(X_train.shape[0], img_cols, img_rows, 1)
X_test = X_test.reshape(X_test.shape[0], img_cols, img_rows, 1)

input_shape=(img_cols, img_rows, 1)
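
For reference, you can check which ordering your installation is configured for from Python. This is a small sketch, assuming the Keras 1.x backend API used elsewhere in this thread:

from keras import backend as K

# Prints 'tf' (channels last, NHWC) or 'th' (channels first, NCHW),
# mirroring the image_dim_ordering entry in ~/.keras/keras.json
print(K.image_dim_ordering())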

Answer 1 (score: 19)

FWIW, I kept getting this error with some values of strides or kernel_size but not all, even with the backend and image ordering already set to TensorFlow's, and it went away once I added padding="same".
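
A minimal sketch of that fix, assuming the Keras 2 argument names this answer uses and a 28x28 grayscale input (not taken from the original post):

from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential()
# padding='same' keeps the spatial size, so no dimension can shrink below zero
model.add(Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu',
                 input_shape=(28, 28, 1)))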

Answer 2 (score: 18)

Just add:

from keras import backend as K
K.set_image_dim_ordering('th')
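
Note that this call needs to run before the model is built; 'th' makes Keras expect channels-first (samples, channels, height, width) input, which matches the (1, img_rows, img_cols) reshape already used in the question.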

Answer 3 (score: 2)

I ran into the same problem, but solved it by choosing the reshape and input_shape based on the backend's image data format before the Conv2D layer:

from keras import backend as K

# Reshape according to the ordering the backend expects
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_cols, img_rows)
    x_test = x_test.reshape(x_test.shape[0], 1, img_cols, img_rows)
    input_shape = (1, img_cols, img_rows)
else:
    x_train = x_train.reshape(x_train.shape[0], img_cols, img_rows, 1)
    x_test = x_test.reshape(x_test.shape[0], img_cols, img_rows, 1)
    input_shape = (img_cols, img_rows, 1)

model.add(Convolution2D(32, (3, 3), input_shape=input_shape, activation='relu'))
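
Branching on K.image_data_format() keeps the same script working whether the installed Keras is configured for channels-first or channels-last data.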

Answer 4 (score: 0)

I had the same problem as well. In my case, every Conv3D layer I was using kept shrinking the input, so including the argument padding='same' when declaring the Conv2D/Conv3D layers solved it. Here is a demo line:

model.add(Conv3D(32, kernel_size=(3, 3, 3), activation='relu', padding='same'))

Reducing the filter size also solves the problem.
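
As a rough sketch of the arithmetic behind both fixes (a hypothetical helper, not from the original answer): with stride 1, 'valid' convolution shrinks each spatial dimension by kernel_size - 1, while 'same' pads to preserve it.

def conv_output_size(input_size, kernel_size, padding):
    # Stride-1 output length: 'same' pads to keep the size, 'valid' loses kernel_size - 1
    if padding == 'same':
        return input_size
    return input_size - kernel_size + 1

print(conv_output_size(1, 3, 'valid'))   # -1, the negative dimension from the traceback
print(conv_output_size(28, 3, 'valid'))  # 26
print(conv_output_size(28, 3, 'same'))   # 28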

Answer 5 (score: 0)

Provide the filter size inside parentheses, for example:

model.add(Convolution2D(nb_filters, (nb_conv, nb_conv), border_mode='valid',
                        input_shape=(1, img_rows, img_cols)))

In my case this works fine; also change X_train and X_test like this:

X_train = X_train.reshape(X_train.shape[0], img_cols, img_rows, 1)
X_test = X_test.reshape(X_test.shape[0], img_cols, img_rows, 1)
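
Passing the kernel size as a single tuple is the Keras 2 signature; Keras 1 accepted the two kernel dimensions as separate positional arguments, which is how the question's code calls Convolution2D.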

Answer 6 (score: 0)

Another solution that may help: change

from keras.layers import Convolution2D, MaxPooling2D

to

from keras.layers import Conv2D, MaxPooling2D

Then, in the input-data preprocessing step, change:

X_train = X_train.reshape(X_train.shape[0], 1, 28, 28)
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28)

to

X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)

Finally, I changed:

model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1, 28, 28)))
model.add(Convolution2D(32, 3, 3, activation='relu'))

to

model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(Conv2D(32, (3, 3), activation='relu'))
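
In short, this switches the layers to the Keras 2 API names and the data to the channels-last (28, 28, 1) layout, which is TensorFlow's native image ordering.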

Answer 7 (score: -1)

render() {
        return (
            <div>
                <RecentlyOpened />
            </div>
        );
    }