How do I train a Keras model with an image and separate values as input? (Mixed inputs)

Time: 2017-10-23 14:36:51

Tags: image-processing deep-learning keras reinforcement-learning

I am building a reinforcement-learning agent for an autonomous helicopter. My pure image-input Keras (1.0.7) model looks like this:

from keras.models import Sequential
from keras.layers import Convolution2D, Activation, Flatten, Dense

image_model = Sequential()
image_model.add(Convolution2D(32, 8, 8, subsample=(4, 4), input_shape=(1, 120, 215)))
image_model.add(Activation('relu'))
image_model.add(Convolution2D(64, 4, 4, subsample=(2, 2)))
image_model.add(Activation('relu'))
image_model.add(Convolution2D(64, 3, 3, subsample=(1, 1)))
image_model.add(Activation('relu'))
image_model.add(Flatten())
image_model.add(Dense(512))
image_model.add(Activation('relu'))
image_model.add(Dense(nb_actions))
image_model.add(Activation('linear'))

To learn properly, I have to pass some additional values to my model besides the raw image (the helicopter's orientation, position, etc.). I think I have to use two network streams that are merged into a single output layer or several output layers:

image_model = Sequential()
image_model.add(Convolution2D(32, 8, 8, subsample=(4, 4), input_shape=input_shape))
image_model.add(Activation('relu'))
image_model.add(Convolution2D(64, 4, 4, subsample=(2, 2)))
image_model.add(Activation('relu'))
image_model.add(Convolution2D(64, 3, 3, subsample=(1, 1)))
image_model.add(Activation('relu'))
image_model.add(Flatten())
image_model.add(Dense(512))
image_model.add(Activation('relu'))


value_model = Sequential()
value_model.add(Flatten(input_shape=values))
value_model.add(Dense(16))
value_model.add(Activation('relu'))
value_model.add(Dense(16))
value_model.add(Activation('relu'))
value_model.add(Dense(16))
value_model.add(Activation('relu'))



model = Sequential()

#merge together somehow

model.add(Dense(nb_actions))
model.add(Activation('linear'))

Merging (http://jsfiddle.net/BramG/f8ce1b0w/8/) is, as far as I understand it, for combining image streams with other image streams. How do I combine these different types of input?

EDIT: Here is an attempt at what I mean. I want to train my agent at every time step with exactly one image and one separate value. Since I don't think I should pass the separate value through the conv-network stream together with the image, I want a second stream for that value and then join the image and value networks:

INPUT_SHAPE = (119, 214)
WINDOW_LENGTH = 1

img_input = (WINDOW_LENGTH,) + INPUT_SHAPE

img = Convolution2D(32, 8, 8, subsample=(4, 4), activation='relu', input_shape=img_input)
img = Convolution2D(64, 4, 4, subsample=(2, 2), activation='relu', input_shape=img)
img = Convolution2D(64, 3, 3, subsample=(1, 1), activation='relu', input_shape=img)
img = Flatten(input_shape=img)
img = Dense(512, activation='relu', input_shape=img)


value_input = (1,2)
value = Flatten()(value_input)
value = Dense(16, activation='relu')(value)
value = Dense(16, activation='relu')(value)
value = Dense(16, activation='relu')(value)

actions = Dense(nb_actions, activation='linear')(img)(value)


model = Model([img_input, value_input], [actions])

Neither the img = Convolution2D(32, 8, 8, subsample=(4, 4), activation='relu', input_shape=img_input) style nor the img = Convolution2D(32, 8, 8, subsample=(4, 4), activation='relu')(img_input) style works.

Also, I don't know how to combine the two streams in actions = Dense(nb_actions, activation='linear')(img)(value).
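For reference, here is a minimal sketch of what the attempt above seems to be aiming for, written against the Keras 1.x functional API. The value shape (2,), the placeholder nb_actions, and the Theano-style channel ordering are assumptions taken from the snippets above, not a definitive implementation:

from keras.layers import Input, Convolution2D, Flatten, Dense, merge
from keras.models import Model

nb_actions = 4  # placeholder: number of discrete actions

# image stream: one 119x214 grayscale frame per time step (Theano dim ordering)
img_input = Input(shape=(1, 119, 214))
x = Convolution2D(32, 8, 8, subsample=(4, 4), activation='relu')(img_input)
x = Convolution2D(64, 4, 4, subsample=(2, 2), activation='relu')(x)
x = Convolution2D(64, 3, 3, subsample=(1, 1), activation='relu')(x)
x = Flatten()(x)
x = Dense(512, activation='relu')(x)

# value stream: assumed two scalars (e.g. orientation, position) per time step
value_input = Input(shape=(2,))
v = Dense(16, activation='relu')(value_input)
v = Dense(16, activation='relu')(v)
v = Dense(16, activation='relu')(v)

# join both streams, then predict one linear output per action
joined = merge([x, v], mode='concat')
actions = Dense(nb_actions, activation='linear')(joined)

model = Model(input=[img_input, value_input], output=actions)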

1 Answer:

Answer 0 (score: 0):

For this, you have to use the Model class (functional) API instead of Sequential.

I am not sure exactly what you are trying to achieve here, but I hope the following code helps you:

from keras.models import Model
from keras.layers import Input, Convolution2D, Flatten, Dense
from keras.optimizers import Adam

inp = Input((1, 120, 215))
x = Convolution2D(32, 8, 8, subsample=(4, 4), activation='relu')(inp)
x = Convolution2D(64, 4, 4, subsample=(2, 2), activation='relu')(x)
x = Convolution2D(64, 3, 3, subsample=(1, 1), activation='relu')(x)
x = Flatten()(x)
x = Dense(512, activation='relu')(x)

x_a = Dense(nb_actions, name='a', activation='linear')(x)
x_b = Dense(nb_classes, activation='softmax', name='b')(x)

model = Model([inp], [x_a, x_b])
model.compile(Adam(lr=0.001), loss=['mse', 'categorical_crossentropy'], metrics=['accuracy'],
         loss_weights=[.0001, 1.]) #adjust loss-Weights 
model.fit(train_feat, [train_labels_a, train_labels_b], batch_size=batch_size, nb_epoch=3, 
         validation_data=(val_feat, [val_labels_a, val_labels_b]))
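This first snippet has a single image input and two outputs, so predict returns one array per output. A small usage sketch (image_batch is a hypothetical array shaped like the input batch):

# hypothetical batch of images with shape (n_samples, 1, 120, 215)
pred_a, pred_b = model.predict(image_batch)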

EDIT If you need a model with 2 inputs and 1 output, try:

from keras.models import Sequential
from keras.layers import Convolution2D, Activation, Flatten, Dense, Merge

image_model = Sequential()
image_model.add(Convolution2D(32, 8, 8, subsample=(4, 4), input_shape=input_shape))
image_model.add(Activation('relu'))
image_model.add(Convolution2D(64, 4, 4, subsample=(2, 2)))
image_model.add(Activation('relu'))
image_model.add(Convolution2D(64, 3, 3, subsample=(1, 1)))
image_model.add(Activation('relu'))
image_model.add(Flatten())
image_model.add(Dense(512))
image_model.add(Activation('relu'))


value_model = Sequential()
value_model.add(Flatten(input_shape=values))
value_model.add(Dense(16))
value_model.add(Activation('relu'))
value_model.add(Dense(16))
value_model.add(Activation('relu'))
value_model.add(Dense(16))
value_model.add(Activation('relu'))

merged = Merge([image_model, value_model], mode='concat')  # Keras 1.x Merge layer; in Keras 2+ use the functional API with Concatenate

final_model = Sequential()
final_model.add(merged)
final_model.add(Dense(nb_actions, activation='linear'))
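To train this two-input model, the input arrays are passed as a list in the same order as the branches given to Merge. A minimal sketch with made-up placeholder data and an assumed mse loss (the shapes follow input_shape=(1, 120, 215) from the question and a (1, 2) values shape, both assumptions):

import numpy as np

# hypothetical placeholder data: 100 samples
images = np.random.rand(100, 1, 120, 215)          # assumed image input_shape
extra_values = np.random.rand(100, 1, 2)           # assumed shape behind Flatten(input_shape=values)
targets = np.random.rand(100, nb_actions)          # one regression target per action

final_model.compile(optimizer='adam', loss='mse')  # loss chosen for illustration only
final_model.fit([images, extra_values], targets, batch_size=32, nb_epoch=3)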