I am trying to write a custom activation layer in Keras. The problem is that I tried to implement it with both a sigmoid-based and a ReLU-based activation function. The two examples are practically identical, yet one works and the other does not. The working example is:
class ParamRelu(Layer):
    def __init__(self, alpha, **kwargs):
        super(ParamRelu, self).__init__(**kwargs)
        self.alpha = K.cast_to_floatx(alpha)

    def call(self, inputs):
        return K.sigmoid(self.alpha * inputs) * inputs

    def get_config(self):
        config = {'alpha': float(self.alpha)}
        base_config = super(ParamRelu, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

    def compute_output_shape(self, input_shape):
        return input_shape
def aafcnn(alpha_row):
    (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
    x_train = x_train[:, :, :, np.newaxis] / 255.0
    x_test = x_test[:, :, :, np.newaxis] / 255.0
    y_train = to_categorical(y_train)
    y_test = to_categorical(y_test)
    model = Sequential()
    model.add(Conv2D(filters=16, kernel_size=3, padding='same', input_shape=(28, 28, 1)))
    model.add(ParamRelu(alpha=alpha_row[0]))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=32, kernel_size=3, padding='same'))
    model.add(ParamRelu(alpha=alpha_row[1]))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=64, kernel_size=3, padding='same'))
    model.add(ParamRelu(alpha=alpha_row[2]))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Flatten())
    model.add(Dense(50, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.summary()
    model.fit(x_train, y_train, epochs=1, validation_split=0.1)
    _, test_acc = model.evaluate(x_test, y_test)
    print(test_acc)

alpha_matrix = np.random.rand(10, 3)
for i in range(10):
    aafcnn(alpha_matrix[i])
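For reference, the working call() computes sigmoid(alpha * inputs) * inputs element-wise (the Swish/SiLU activation), so the output shape always equals the input shape. A minimal NumPy sketch of the same math (the alpha value and tensor shape here are arbitrary, chosen only for illustration):

```python
import numpy as np

def swish(x, alpha):
    # Element-wise sigmoid(alpha * x) * x -- the same math as the working call()
    return 1.0 / (1.0 + np.exp(-alpha * x)) * x

x = np.random.randn(2, 28, 28, 16)   # dummy 4D activation tensor (batch, h, w, channels)
y = swish(x, alpha=0.5)
print(y.shape)  # (2, 28, 28, 16): shape preserved, so MaxPooling2D still sees 4D input
```

Because every operation in call() is element-wise, compute_output_shape returning input_shape unchanged is consistent with what the layer actually does.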
This works. The following does not:
class ParamRelu(Layer):
    def __init__(self, alpha, **kwargs):
        super(ParamRelu, self).__init__(**kwargs)
        self.alpha = K.cast_to_floatx(alpha)

    def call(self, inputs):
        return K.max((self.alpha * inputs), 0)

    def get_config(self):
        config = {'alpha': float(self.alpha)}
        base_config = super(ParamRelu, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

    def compute_output_shape(self, input_shape):
        return input_shape
def aafcnn(alpha_row):
    (x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
    x_train = x_train[:, :, :, np.newaxis] / 255.0
    x_test = x_test[:, :, :, np.newaxis] / 255.0
    y_train = to_categorical(y_train)
    y_test = to_categorical(y_test)
    model = Sequential()
    model.add(Conv2D(filters=16, kernel_size=3, padding='same', input_shape=(28, 28, 1)))
    model.add(ParamRelu(alpha=alpha_row[0]))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=32, kernel_size=3, padding='same'))
    model.add(ParamRelu(alpha=alpha_row[1]))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Conv2D(filters=64, kernel_size=3, padding='same'))
    model.add(ParamRelu(alpha=alpha_row[2]))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Flatten())
    model.add(Dense(50, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.summary()
    model.fit(x_train, y_train, epochs=1, validation_split=0.1)
    _, test_acc = model.evaluate(x_test, y_test)
    print(test_acc)

alpha_matrix = np.random.rand(10, 3)
for i in range(10):
    aafcnn(alpha_matrix[i])
The error is:
ValueError: Input 0 of layer max_pooling2d is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [28, 28, 16]
I tried using input_shape=(None, 28, 28, 1) instead of input_shape=(28, 28, 1), but then the error becomes:
ValueError: Input 0 of layer conv2d is incompatible with the layer: expected ndim=4, found ndim=5. Full shape received: [None, None, 28, 28, 1]
What am I doing wrong?
Answer 0 (score: 2)
The problem is that in the second case, the line

    return K.max((self.alpha * inputs), 0)

takes the maximum along axis=0, which is a reduction: it collapses that axis and the result has one dimension fewer than the input. As a consequence, max_pooling2d no longer receives the 4D input it expects.
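The distinction can be sketched without Keras, since NumPy has the same pair of operations: a reducing max collapses an axis, while an element-wise maximum preserves the shape. Assuming the intended activation is ReLU-style, the layer's call() would use the element-wise K.maximum(self.alpha * inputs, 0.) rather than the reducing K.max:

```python
import numpy as np

# Dummy 4D activation tensor: (batch, height, width, channels)
x = np.random.randn(4, 28, 28, 16)

# Reducing max (analogous to K.max(..., 0)): collapses axis 0.
# The result is 3D -- exactly the [28, 28, 16] shape reported in the error.
reduced = np.max(0.5 * x, axis=0)
print(reduced.shape)  # (28, 16) is wrong -- it is (28, 28, 16), still 3D not 4D

# Element-wise maximum against a scalar (analogous to K.maximum(..., 0.)):
# every entry is clipped at zero and the 4D shape is preserved.
clipped = np.maximum(0.5 * x, 0.0)
print(clipped.shape)  # (4, 28, 28, 16)
```

With the element-wise version the layer really is a parametric ReLU, and its output shape matches compute_output_shape, so the downstream pooling layers receive the 4D tensors they expect.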