Tuning neural network hyperparameters when using the Keras functional API

Date: 2019-08-05 10:55:11

Tags: python-3.x keras conv-neural-network

I have a neural network with two branches. One branch feeds the input into a convolutional neural network, while the other branch is a fully connected layer. I merge the two branches and then get the output using softmax. I can't use a Sequential model because it is deprecated, so I have to use the functional API. I want to tune the hyperparameters of the convolutional branch; for example, I want to figure out how many convolutional layers I should use. With a Sequential model I would use a for loop, but since I'm using the functional API I can't really do that. I've attached my code. Can anyone tell me how to optimize my network over the number of convolutions in a smart way, instead of writing many different scripts with different numbers of convolutional layers?

Any suggestions are appreciated.


i1 = Input(shape=(xtest.shape[1], xtest.shape[2]))

###Convolution branch
c1 = Conv1D(128*2, kernel_size=ksize,activation='relu',kernel_regularizer=keras.regularizers.l2(l2_lambda))(i1)
c1 = Conv1D(128*2, kernel_size=ksize, activation='relu',kernel_regularizer=keras.regularizers.l2(l2_lambda))(c1)
c1 = AveragePooling1D(pool_size=ksize)(c1)
c1 = Dropout(0.2)(c1)

c1 = Conv1D(128*2, kernel_size=ksize, activation='relu',kernel_regularizer=keras.regularizers.l2(l2_lambda))(c1)
c1 = AveragePooling1D(pool_size=ksize)(c1)
c1 = Dropout(0.2)(c1)

c1 = Flatten()(c1)

###fully connected branch
i2 = Input(shape=(5000, ))
c2 = Dense(64,  activation='relu',kernel_regularizer=keras.regularizers.l2(l2_lambda))(i2)
c2 = Dropout(0.1)(c2)


###concatenating the two branches
c = concatenate([c1, c2])

x = Dense(256, activation='relu', kernel_initializer='normal',kernel_regularizer=keras.regularizers.l2(l2_lambda))(c)
x = Dropout(0.25)(x)

###Output branch 
output = Dense(num_classes, activation='softmax')(x)

model = Model([i1, i2], [output])

model.summary()

With a Sequential model I could use a for loop, for example:


layers = [1,2,3,4,5]

b1 = Sequential()
b1.add(Conv1D(128*2, kernel_size=ksize,
                 activation='relu',
                 input_shape=( xtest.shape[1], xtest.shape[2]),
                 kernel_regularizer=keras.regularizers.l2(l2_lambda)))

for layer in layers:
    count = layer
    while count > 0:
        b1.add(Conv1D(128*2, kernel_size=ksize, activation='relu',kernel_regularizer=keras.regularizers.l2(l2_lambda)))
        count -= 1

b1.add(MaxPooling1D(pool_size=ksize))
b1.add(Dropout(0.2))

b1.add(Flatten())
b2 = Sequential()

b2.add(Dense(64, input_shape = (5000,), activation='relu',kernel_regularizer=keras.regularizers.l2(l2_lambda)))

for layer in layers:
    count = layer
    while count > 0:
        b2.add(Dense(64, activation='relu', kernel_regularizer=keras.regularizers.l2(l2_lambda)))
        count -= 1


model = Sequential()
model.add(Merge([b1, b2], mode = 'concat'))
model.add(Dense(256, activation='relu', kernel_initializer='normal',kernel_regularizer=keras.regularizers.l2(l2_lambda)))
model.add(Dropout(0.25))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer=keras.optimizers.Adam(),
                  metrics=['accuracy'])


2 Answers:

Answer 0 (score: 1)

You can also set up the model structure dynamically with the functional API. For the convolutional branch you could use something like the following:

layer_shapes = (64, 64, 32)

b1 = i1  # start from the input of the convolution branch
for filters in layer_shapes:
    b1 = Conv1D(filters, kernel_size=ksize, activation='relu', kernel_regularizer=keras.regularizers.l2(l2_lambda))(b1)

You just need to replace each Sequential.add call with the corresponding variable assignment.
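Applying that pattern to the question's full two-branch model gives a single build function with a tunable convolution depth. The sketch below is an assumption-laden illustration, not the asker's exact setup: the input shapes, ksize, l2_lambda and num_classes values are placeholders, and it uses the tensorflow.keras imports.

```python
# Sketch: the question's two-branch model with a tunable number of
# Conv1D layers, built with the functional API. All shape and
# hyperparameter values below are placeholders.
from tensorflow import keras
from tensorflow.keras.layers import (Input, Conv1D, AveragePooling1D,
                                     Dropout, Flatten, Dense, concatenate)
from tensorflow.keras.models import Model

def build_model(num_conv_layers, seq_shape=(100, 4), dense_dim=5000,
                num_classes=3, ksize=3, l2_lambda=1e-4):
    reg = keras.regularizers.l2(l2_lambda)

    # Convolution branch: the loop replaces the repeated
    # c1 = Conv1D(...)(c1) lines from the question.
    i1 = Input(shape=seq_shape)
    c1 = i1
    for _ in range(num_conv_layers):
        c1 = Conv1D(128 * 2, kernel_size=ksize, activation='relu',
                    kernel_regularizer=reg)(c1)
    c1 = AveragePooling1D(pool_size=ksize)(c1)
    c1 = Dropout(0.2)(c1)
    c1 = Flatten()(c1)

    # Fully connected branch.
    i2 = Input(shape=(dense_dim,))
    c2 = Dense(64, activation='relu', kernel_regularizer=reg)(i2)
    c2 = Dropout(0.1)(c2)

    # Merge the two branches and classify.
    c = concatenate([c1, c2])
    x = Dense(256, activation='relu', kernel_regularizer=reg)(c)
    x = Dropout(0.25)(x)
    output = Dense(num_classes, activation='softmax')(x)
    return Model([i1, i2], [output])

model = build_model(num_conv_layers=3)
```

Because the depth is now just a function argument, trying a different number of convolutional layers is one call with a different value rather than a separate script.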

Answer 1 (score: 1)

Here is a minimal example of a model with a variable number of layers using the Keras functional API:

from keras.layers import Input, Conv2D, Dense, Dropout, Flatten, MaxPool2D
from keras.models import Model

def build_model(num_layers, input_shape, num_classes): 
  input = Input(shape=input_shape)
  x = Conv2D(32, (3, 3), activation='relu')(input)

  # Suppose you want to find out how many additional convolutional 
  # layers to add here.
  for _ in range(num_layers):
    x = Conv2D(32, (3, 3), activation='relu')(x)

  x = MaxPool2D((2, 2))(x)
  x = Flatten()(x)
  x = Dense(64, activation='relu')(x)
  x = Dropout(0.5)(x)
  x = Dense(num_classes, activation='softmax')(x)

  return Model(inputs=input, outputs=x)

model = build_model(num_layers=2, input_shape=(128, 128, 3), num_classes=3)

Here is how I would find out how many "intermediate" convolutional layers to use:

  1. Train several models with the num_layers parameter set to various values. The code that builds all of these models is exactly the same; only the value of the num_layers parameter changes between training runs.
  2. Pick the one that scores best on the metric you care most about.

That's it!
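The two steps above can be sketched as a plain loop over candidate depths. This is a hedged illustration only: the data is random placeholder data, a single epoch is used just so the loop runs, and the shapes and class count are made up. A real search would use the actual dataset and enough epochs for the validation scores to be meaningful.

```python
# Sketch of the search: build and train one model per candidate depth,
# then keep the depth with the best validation accuracy. The data here
# is synthetic placeholder data.
import numpy as np
from tensorflow.keras.layers import (Input, Conv2D, Dense, Dropout,
                                     Flatten, MaxPool2D)
from tensorflow.keras.models import Model

def build_model(num_layers, input_shape, num_classes):
    inp = Input(shape=input_shape)
    x = Conv2D(32, (3, 3), activation='relu')(inp)
    # Variable number of additional convolutional layers.
    for _ in range(num_layers):
        x = Conv2D(32, (3, 3), activation='relu')(x)
    x = MaxPool2D((2, 2))(x)
    x = Flatten()(x)
    x = Dense(64, activation='relu')(x)
    x = Dropout(0.5)(x)
    x = Dense(num_classes, activation='softmax')(x)
    return Model(inputs=inp, outputs=x)

# Placeholder data: 32 tiny RGB images, 3 classes.
x_train = np.random.rand(32, 28, 28, 3).astype('float32')
y_train = np.random.randint(0, 3, size=(32,))

results = {}
for num_layers in [0, 1, 2]:
    model = build_model(num_layers, input_shape=(28, 28, 3), num_classes=3)
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    history = model.fit(x_train, y_train, epochs=1, verbose=0,
                        validation_split=0.25)
    results[num_layers] = history.history['val_accuracy'][-1]

best_depth = max(results, key=results.get)
```

With random data the "best" depth is meaningless, but the loop structure is exactly what step 1 describes: the build code never changes, only the num_layers value does.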

Side note: as far as I know, the Keras Sequential model is not deprecated.