I'm getting started with Keras.
Previously I wrote my own generators and networks, but I preprocessed the data before passing it to the neural network. The task now is to do that preprocessing inside Keras.
My previous model looked like this:
from keras.models import Sequential
from keras.layers import Lambda, Convolution2D, Activation, Flatten, Dense, Dropout
from keras.regularizers import l2
from keras.optimizers import Adam
input_shape = (64, 64, 3)
model = Sequential()
model.add(Lambda(lambda x: x / 255 - 0.5, input_shape=input_shape))
model.add(Convolution2D(24, 5, 5, border_mode='valid', subsample=(2, 2), W_regularizer=l2(0.001)))
model.add(Activation('relu'))
model.add(Convolution2D(36, 5, 5, border_mode='valid', subsample=(2, 2), W_regularizer=l2(0.001)))
model.add(Activation('relu'))
model.add(Convolution2D(48, 5, 5, border_mode='valid', subsample=(2, 2), W_regularizer=l2(0.001)))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3, border_mode='same', subsample=(2, 2), W_regularizer=l2(0.001)))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3, border_mode='valid', subsample=(2, 2), W_regularizer=l2(0.001)))
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dense(80, W_regularizer=l2(0.001)))
model.add(Dropout(0.5))
model.add(Dense(40, W_regularizer=l2(0.001)))
model.add(Dropout(0.5))
model.add(Dense(16, W_regularizer=l2(0.001)))
model.add(Dropout(0.5))
model.add(Dense(10, W_regularizer=l2(0.001)))
model.add(Dense(1, W_regularizer=l2(0.001)))
adam = Adam(lr=0.0001)
model.compile(optimizer=adam, loss='mse')
model.summary()
return model
This worked very well as long as I resized the images before passing them into the network for training. So, for my network to take images straight from the game, the resizing needs to happen inside the network. But as soon as I create an InputLayer from which to resize the images, I get a ValueError. I haven't changed much inside the network, but it now looks like this:
from keras.models import Model
from keras.layers import (InputLayer, Lambda, Cropping2D, Convolution2D,
                          Activation, Flatten, Dense, Dropout)
from keras.regularizers import l2
from keras.optimizers import Adam
img_shape = (160, 320, 3)
inputLayer = InputLayer(input_shape=(None, 160, 320, 3))
normalize = Lambda(lambda x: x / 255 - 0.5, input_shape=img_shape)(inputLayer)
cropped = Cropping2D(cropping=((50, 20), (0, 0)), input_shape=(160, 320, 3))(normalize)
conv1 = Convolution2D(24, 5, 5, border_mode='valid', subsample=(2, 2), W_regularizer=l2(0.001))(cropped)
conv1_activ = Activation("relu")(conv1)
conv2 = Convolution2D(36, 5, 5, border_mode='valid', subsample=(2, 2), W_regularizer=l2(0.001))(conv1_activ)
conv2_activ = Activation("relu")(conv2)
conv3 = Convolution2D(48, 5, 5, border_mode='valid', subsample=(2, 2), W_regularizer=l2(0.001))(conv2_activ)
conv3_activ = Activation("relu")(conv3)
conv4 = Convolution2D(64, 3, 3, border_mode='same', subsample=(2, 2), W_regularizer=l2(0.001))(conv3_activ)
conv4_activ = Activation("relu")(conv4)
conv5 = Convolution2D(64, 3, 3, border_mode='valid', subsample=(2, 2), W_regularizer=l2(0.001))(conv4_activ)
conv5_activ = Activation("relu")(conv5)
flattened = Flatten()(conv5_activ)
fullyConnected1 = Dense(80, W_regularizer=l2(0.001))(flattened)
dropOut1 = Dropout(0.5)(fullyConnected1)
fullyConnected2 = Dense(40, W_regularizer=l2(0.001))(dropOut1)
dropOut2 = Dropout(0.5)(fullyConnected2)
fullyConnected3 = Dense(16, W_regularizer=l2(0.001))(dropOut2)
dropOut3 = Dropout(0.5)(fullyConnected3)
fullyConnected4 = Dense(10, W_regularizer=l2(0.001))(dropOut3)
fullyConnected5 = Dense(1, W_regularizer=l2(0.001))(fullyConnected4)
opt = Adam(lr=0.0001)
model = Model(inputs=normalize, outputs=fullyConnected5)
model.compile(optimizer=opt, loss="mse")
model.summary()
return model
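As a side note, independent of the ValueError, the shapes this stack would produce from the cropped 160x320 input can be sanity-checked with the standard convolution-size formulas in plain Python (the kernel/stride/padding values below are copied from the code above):

```python
import math

def conv_out(size, kernel, stride, padding):
    """Output length of one conv dimension for 'valid'/'same' padding."""
    if padding == 'valid':
        return (size - kernel) // stride + 1
    return math.ceil(size / stride)  # 'same'

# Cropping2D(((50, 20), (0, 0))) on a 160x320 image leaves 90x320.
h, w = 160 - 50 - 20, 320

# (kernel, stride, padding) for each conv layer in the model above.
layers = [(5, 2, 'valid'), (5, 2, 'valid'), (5, 2, 'valid'),
          (3, 2, 'same'), (3, 2, 'valid')]
for k, s, p in layers:
    h, w = conv_out(h, k, s, p), conv_out(w, k, s, p)

print(h, w)        # 1 9
print(h * w * 64)  # 576 units feed Flatten -> Dense(80)
```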
The error raised when calling that function is the following:
ValueError: Layer lambda_1 was called with an input that isn't a symbolic tensor.
Received type: <class 'keras.engine.topology.InputLayer'>. Full input: [<keras.engine.topology.InputLayer object at 0x1213a6fd0>].
All inputs to the layer should be tensors.
I already had an idea for resizing the image with raw TensorFlow:
Lambda(lambda image: ktf.image.resize_images(image, (64, 64)))(inputLayer)
So the only real problem is how to actually get this working past the ValueError, and I have no idea how. Thanks!
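(As a quick check that the resize-Lambda idea itself is sound, here is a minimal sketch assuming a current TensorFlow install: the ktf above was the keras.backend.tf alias, and in TensorFlow 2 the call is tf.image.resize. The key difference from the failing code is that the Lambda is applied to an Input tensor, not to an InputLayer object.)

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda
from tensorflow.keras.models import Model

# A tiny model that only resizes; Input yields a tensor the Lambda can accept.
inp = Input(shape=(160, 320, 3))
resized = Lambda(lambda image: tf.image.resize(image, (64, 64)))(inp)
model = Model(inputs=inp, outputs=resized)

batch = np.zeros((2, 160, 320, 3), dtype='float32')
out = model(batch)
print(tuple(out.shape))  # (2, 64, 64, 3)
```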
Answer 0 (score: 0)
Try using Input(img_shape) instead of InputLayer(input_shape=(None, 160, 320, 3)).
You cannot pass a "layer" to another layer; you must pass a "tensor". The logic of every line in a model definition is:
outputTensor = SomeLayer(blablabla)(inputTensor)
An InputLayer is not itself a tensor, but Input is.
Hint: you don't need to pass input_shape to the other layers. Once Input(shape) is defined, everything else is inferred automatically. Only a Lambda whose function changes the shape may need an explicit output_shape=....
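Putting that together, here is a minimal sketch of the fixed model. It assumes the current tf.keras API, where Convolution2D(24, 5, 5, border_mode=..., subsample=..., W_regularizer=...) becomes Conv2D(24, (5, 5), padding=..., strides=..., kernel_regularizer=...) and Adam takes learning_rate instead of lr; note that Model must also receive the Input tensor, not the output of the normalize Lambda:

```python
from tensorflow.keras.layers import (Input, Lambda, Cropping2D, Conv2D,
                                     Flatten, Dense, Dropout)
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l2
from tensorflow.keras.optimizers import Adam

inp = Input(shape=(160, 320, 3))           # a tensor, unlike InputLayer
x = Lambda(lambda v: v / 255 - 0.5)(inp)   # no input_shape needed any more
x = Cropping2D(cropping=((50, 20), (0, 0)))(x)

# Same conv stack as the question; relu folded into the conv layers.
for filters, size, padding in [(24, 5, 'valid'), (36, 5, 'valid'),
                               (48, 5, 'valid'), (64, 3, 'same'),
                               (64, 3, 'valid')]:
    x = Conv2D(filters, (size, size), strides=(2, 2), padding=padding,
               kernel_regularizer=l2(0.001), activation='relu')(x)

x = Flatten()(x)
for units in (80, 40, 16):
    x = Dropout(0.5)(Dense(units, kernel_regularizer=l2(0.001))(x))
x = Dense(10, kernel_regularizer=l2(0.001))(x)
out = Dense(1, kernel_regularizer=l2(0.001))(x)

model = Model(inputs=inp, outputs=out)     # pass the Input tensor here
model.compile(optimizer=Adam(learning_rate=0.0001), loss='mse')
model.summary()
```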