I am trying to pass RGB images from a simulator into my custom neural network. At the source where the RGB images are generated (the simulator), each image has shape (3, 144, 256).
This is how I build the network:
rgb_model = Sequential()
rgb = env.shape()  # this is (3, 144, 256)
rgb_shape = (1,) + rgb
rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid', activation='relu', input_shape=rgb_shape, data_format = "channels_first"))
Now my rgb_shape is (1, 3, 144, 256).
This is the error I get:
rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid', activation='relu', input_shape=rgb_shape, data_format = "channels_first"))
File "/usr/local/lib/python2.7/dist-packages/keras/engine/sequential.py", line 166, in add
layer(x)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/base_layer.py", line 414, in call
self.assert_input_compatibility(inputs)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/base_layer.py", line 311, in assert_input_compatibility
str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5
Why does Keras complain that it found ndim=5 when my input_shape actually has 4 dimensions?
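For reference, a quick NumPy sketch of the shapes involved (my own check; rgb here is a stand-in for one simulator frame):

```python
import numpy as np

rgb = np.zeros((3, 144, 256))   # a single frame: (channels, height, width)
rgb_shape = (1,) + rgb.shape    # (1, 3, 144, 256) -- what I pass as input_shape
# Keras prepends one more batch dimension on top of input_shape,
# so the layer ends up expecting 5-D input:
assert len((None,) + rgb_shape) == 5
# whereas a plain batch of frames is only 4-D:
assert np.expand_dims(rgb, axis=0).ndim == 4
```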
P.S.: I have the same problem as this question. Ideally I would have commented on that post, but I don't have enough reputation.
Edit:
Here is the code after addressing that error:
rgb_shape = env.rgb.shape
rgb_model = Sequential()
rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid', activation='relu', input_shape=rgb_shape, data_format = "channels_first"))
rgb_model.add(Conv2D(128, (3, 3), strides=(2, 2), padding='valid', activation='relu', data_format = "channels_first" ))
rgb_model.add(Conv2D(384, (3, 3), strides=(1, 1), padding='valid', activation='relu', data_format = "channels_first"))
rgb_model.add(Conv2D(384, (3, 3), strides=(1,1), padding='valid', activation='relu', data_format = "channels_first"))
rgb_model.add(Conv2D(256, (3,3), strides=(1,1), padding='valid', activation='relu', data_format = "channels_first"))
rgb_model.add(Flatten())
rgb_input = Input(shape=rgb_shape)
rgb = rgb_model(rgb_input)
And here is the new error I get when passing env.rgb.shape as input_shape to Conv2D:
dqn.fit(env, callbacks=callbacks, nb_steps=250000, visualize=False, verbose=0, log_interval=100)
File "/usr/local/lib/python2.7/dist-packages/rl/core.py", line 169, in fit
action = self.forward(observation)
File "/usr/local/lib/python2.7/dist-packages/rl/agents/dqn.py", line 228, in forward
q_values = self.compute_q_values(state)
File "/usr/local/lib/python2.7/dist-packages/rl/agents/dqn.py", line 69, in compute_q_values
q_values = self.compute_batch_q_values([state]).flatten()
File "/usr/local/lib/python2.7/dist-packages/rl/agents/dqn.py", line 64, in compute_batch_q_values
q_values = self.model.predict_on_batch(batch)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1276, in predict_on_batch
x, _, _ = self._standardize_user_data(x)
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 754, in _standardize_user_data
exception_prefix='input')
File "/usr/local/lib/python2.7/dist-packages/keras/engine/training_utils.py", line 126, in standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (1, 1, 3, 144, 256)
Answer (score: 1):
The input shape of a Conv2D layer (with channels_first) is (num_channels, width, height). Therefore you should not add another dimension. (Strictly speaking, the full input shape is (batch_size, num_channels, width, height), but you don't set batch_size here; it is determined later by the fit method.) Just pass input_shape=env.shape to Conv2D and it will work fine.
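A minimal sketch of the suggested fix (using tf.keras here rather than the questioner's standalone Keras on Python 2.7; only the first layer is shown, and (3, 144, 256) stands in for env.shape):

```python
from tensorflow import keras

model = keras.Sequential()
model.add(keras.layers.Conv2D(
    96, (11, 11), strides=(3, 3), padding='valid', activation='relu',
    input_shape=(3, 144, 256),     # (num_channels, height, width) -- no batch dim
    data_format='channels_first'))
# Keras adds the batch dimension itself, so the model expects 4-D input:
assert model.input_shape == (None, 3, 144, 256)
```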
Edit:
Why are you defining an Input layer and passing it to the model? That's not how it works. First you need to compile the model using the compile method, then train it on your training data using the fit method, and then make predictions using the predict method. I strongly recommend reading the official guide to understand how these work.
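The workflow described above, as a toy sketch (a tiny Dense model on random data, just to show the compile → fit → predict order; not the questioner's conv net):

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(2, input_shape=(3,))])
model.compile(optimizer='adam', loss='mse')   # 1. compile

x = np.random.rand(8, 3).astype('float32')
y = np.random.rand(8, 2).astype('float32')
model.fit(x, y, epochs=1, verbose=0)          # 2. fit on training data

preds = model.predict(x, verbose=0)           # 3. predict
assert preds.shape == (8, 2)
```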