I am implementing this paper by Mohammad Havaei. It uses the following architecture:
I modified some code from here to do this.
from keras.models import Sequential
from keras.layers import Convolution2D, MaxPooling2D, Activation, Dropout, Flatten, Dense, Merge
from keras.layers.normalization import BatchNormalization
from keras.optimizers import SGD
from keras.regularizers import l1l2

print('Compiling two-path model...')
# Local pathway
model_l = Sequential()
# First convolution
model_l.add(Convolution2D(64, 7, 7,
            border_mode='valid', W_regularizer=l1l2(l1=0.01, l2=0.01),
            input_shape=(4, 33, 33)))
model_l.add(Activation('relu'))
model_l.add(BatchNormalization(mode=0, axis=1))
model_l.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model_l.add(Dropout(0.5))
# Second convolution (input_shape is only needed on the first layer)
model_l.add(Convolution2D(64, 3, 3,
            border_mode='valid', W_regularizer=l1l2(l1=0.01, l2=0.01)))
model_l.add(BatchNormalization(mode=0, axis=1))
model_l.add(MaxPooling2D(pool_size=(4, 4), strides=(1, 1)))
model_l.add(Dropout(0.5))
# Global pathway
model_g = Sequential()
model_g.add(Convolution2D(160, 12, 12,
            border_mode='valid', W_regularizer=l1l2(l1=0.01, l2=0.01),
            input_shape=(self.n_chan, 33, 33)))
model_g.add(Activation('relu'))
model_g.add(BatchNormalization(mode=0, axis=1))
model_g.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model_g.add(Dropout(0.5))
# Merge local and global pathways
merge = Sequential()
merge.add(Merge([model_l, model_g], mode='concat', concat_axis=1))
merge.add(Convolution2D(5, 21, 21,
          border_mode='valid',
          W_regularizer=l1l2(l1=0.01, l2=0.01)))
# Flatten the 5x1x1 output to a length-5 vector and apply softmax
merge.add(Flatten())
merge.add(Dense(5))
merge.add(Activation('softmax'))
sgd = SGD(lr=0.001, decay=0.01, momentum=0.9)
# Pass the SGD instance configured above, not the string 'sgd',
# otherwise the custom learning rate and decay are ignored
merge.compile(loss='categorical_crossentropy', optimizer=sgd)
print('Done')
return merge
I used this alternative because the Graph model is deprecated in Keras 1.0. My question is: how do I train the model now? I used this to train it:
merge.fit(X_train, Y_train, batch_size=self.batch_size, nb_epoch=self.n_epoch, validation_split=0.1, show_accuracy=True, verbose=1)
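From the Keras 1.0 docs it looks like a Sequential model whose first layer is a Merge of two branches expects a list of input arrays (one per branch), and show_accuracy seems to have been replaced by metrics=['accuracy'] in compile, so I suspect the call would have to look roughly like this (assuming Y_train is already one-hot encoded over the 5 classes):
# Both pathways read the same 4x33x33 patches, so the same array is passed twice:
# once for the local branch and once for the global branch of the Merge
merge.fit([X_train, X_train], Y_train,
          batch_size=self.batch_size, nb_epoch=self.n_epoch,
          validation_split=0.1, verbose=1)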
If I need to train the two pathways separately and then merge them, how would I do that?
Answer (score: 1)
from keras.layers import *
from keras.models import Model
from keras.optimizers import SGD
from keras.regularizers import l1l2

print('Compiling two-path model...')
# Input of the model
input_model = Input(shape=(4,33,33))
# Local pathway
#Add first convolution
model_l = Convolution2D(64,7,7,
border_mode='valid',
activation='relu',
W_regularizer=l1l2(l1=0.01, l2=0.01))(input_model)
model_l = BatchNormalization(mode=0,axis=1)(model_l)
model_l = MaxPooling2D(pool_size=(2,2),strides=(1,1))(model_l)
model_l = Dropout(0.5)(model_l)
#Add second convolution
model_l = Convolution2D(64,3,3,
border_mode='valid',
W_regularizer=l1l2(l1=0.01, l2=0.01))(model_l)
model_l = BatchNormalization(mode=0,axis=1)(model_l)
model_l = MaxPooling2D(pool_size=(4,4),strides=(1,1))(model_l)
model_l = Dropout(0.5)(model_l)
#global pathway
model_g = Convolution2D(160,12,12,
border_mode='valid',
activation='relu',
W_regularizer=l1l2(l1=0.01, l2=0.01))(input_model)
model_g = BatchNormalization(mode=0,axis=1)(model_g)
model_g = MaxPooling2D(pool_size=(2,2), strides=(1,1))(model_g)
model_g = Dropout(0.5)(model_g)
# merge local and global pathways
merge = Merge(mode='concat', concat_axis=1)([model_l,model_g])
merge = Convolution2D(5,21,21,
border_mode='valid',
W_regularizer=l1l2(l1=0.01, l2=0.01))(merge)
merge = Flatten()(merge)
predictions = Dense(5, activation='softmax')(merge)
model_merged = Model(input=input_model,output=predictions)
sgd = SGD(lr=0.001, decay=0.01, momentum=0.9)
model_merged.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
print('Done')
return model_merged
This is equivalent to the network you posted, but defined with the Functional API. As you can see, there is only one input layer, and it is used twice. You can then train it just as you said:
model_merged.fit(X_train, Y_train, batch_size=self.batch_size, nb_epoch=self.n_epoch, validation_split=0.1, verbose=1)
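As for pre-training the two pathways separately: I have not tried it on this exact model, but with the Functional API one rough sketch is to keep a pathway's layers as named objects, train that pathway with a temporary softmax head, and then freeze those layers (trainable = False) before reusing them in the merged model above. The names conv_l1, pre_input and pre_model below are just placeholders for illustration:
# Keep the pathway layer as an object so it can be reused and frozen later
conv_l1 = Convolution2D(64, 7, 7, border_mode='valid', activation='relu',
                        W_regularizer=l1l2(l1=0.01, l2=0.01))
# Temporary model: the local-pathway convolution plus a throwaway classifier head
pre_input = Input(shape=(4, 33, 33))
x = conv_l1(pre_input)
x = MaxPooling2D(pool_size=(2, 2), strides=(1, 1))(x)
x = Flatten()(x)
pre_output = Dense(5, activation='softmax')(x)
pre_model = Model(input=pre_input, output=pre_output)
pre_model.compile(loss='categorical_crossentropy', optimizer='sgd')
pre_model.fit(X_train, Y_train, batch_size=self.batch_size, nb_epoch=self.n_epoch)
# Freeze the pre-trained layer, then build the merged model exactly as above,
# calling conv_l1(input_model) instead of creating a fresh Convolution2D there
conv_l1.trainable = False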
Does that help?