I want to regularize the layers of a CNN with ||W^T * W - I||.

How can I do that in Keras?
Answer 0 (score: 3)
From the documentation:

"Any function that takes in a weight matrix and returns a loss contribution tensor can be used as a regularizer."

Here is an example implementation:
from keras import backend as K

def l1_reg(weight_matrix):
    return 0.01 * K.sum(K.abs(weight_matrix))

model.add(Dense(64, input_dim=64,
                kernel_regularizer=l1_reg))
The loss from your post would be:
from keras import backend as K

def fro_norm(w):
    # Frobenius norm: sqrt of the sum of squared entries
    return K.sqrt(K.sum(K.square(K.abs(w))))

def cust_reg(w):
    # ||W^T * W - I||_F; K.eye needs an int, not a shape tuple
    m = K.dot(K.transpose(w), w) - K.eye(int(w.shape[1]))
    return fro_norm(m)
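The answer never wires cust_reg into a layer; assuming the same Dense layer as in the docs example above, a minimal sketch of attaching it as a kernel regularizer could look like this:

model.add(Dense(64, input_dim=64,
                kernel_regularizer=cust_reg))  # sketch: penalizes non-orthogonal kernel columns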
Here is a minimal example:
import numpy as np
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense, Activation

X = np.random.randn(100, 100)
y = np.random.randint(2, size=(100, 1))

model = Sequential()
# apply regularization here; this penalizes the
# output (activation) of the layer, using fro_norm as defined above
model.add(Dense(32, input_shape=(100,),
                activity_regularizer=fro_norm))
model.add(Dense(1))
model.add(Activation('sigmoid'))  # sigmoid, not softmax: softmax over a single unit is constant

model.compile(loss="binary_crossentropy",
              optimizer='sgd',
              metrics=['accuracy'])

model.fit(X, y, epochs=1, batch_size=32)
The following will NOT work, as @Marcin's comment pointed out: LA.norm fails here because a regularizer must return a Tensor, and LA.norm() returns a plain NumPy scalar instead.
from keras import backend as K
from numpy import linalg as LA

def orth_norm(w):
    m = K.dot(K.transpose(w), w) - K.eye(int(w.shape[1]))
    return LA.norm(m, 'fro')  # fails: m is a symbolic Tensor, not a NumPy array
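A quick sketch of the type difference (my addition, not part of the original answer; it evaluates the same matrix both ways):

import numpy as np
from numpy import linalg as LA
from keras import backend as K

def fro_norm(w):
    return K.sqrt(K.sum(K.square(K.abs(w))))

w = K.variable(np.random.randn(4, 4))
m = K.dot(K.transpose(w), w) - K.eye(4)
print(type(fro_norm(m)))                # a backend Tensor: valid as a regularizer's return value
print(type(LA.norm(K.eval(m), 'fro')))  # numpy.float64: a plain number, which Keras rejects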
Answer 1 (score: 0)
I think for convolutional layers you can use the code below. It is not efficient, but I think it works:
import keras.backend as K
import tensorflow as tf

def orthogonality_regularization(weight_matrix):
    # conv kernels have shape (h, w, in_channels, out_channels);
    # build the entries of W^T * W - I filter by filter
    n_filters = int(weight_matrix.shape[-1])
    identity = K.eye(n_filters)
    orthogonality_reg_mat = []
    for i in range(n_filters):
        for j in range(n_filters):
            dot_ij = K.sum(tf.multiply(K.flatten(weight_matrix[:, :, :, i]),
                                       K.flatten(weight_matrix[:, :, :, j])))
            orthogonality_reg_mat.append(dot_ij - identity[i, j])
    # norm over all entries of W^T * W - I
    orthogonality_reg = tf.linalg.norm(tf.convert_to_tensor(orthogonality_reg_mat))
    return orthogonality_reg
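A possibly more efficient vectorized variant (my sketch, not from the answer): reshape the kernel into a 2-D matrix whose columns are the flattened filters, then compute the same ||W^T * W - I|| penalty with backend ops alone.

from keras import backend as K

def orthogonality_regularization_vec(weight_matrix):
    # flatten each (h, w, in_channels) filter into one column of a 2-D matrix
    n_filters = int(weight_matrix.shape[-1])
    w = K.reshape(weight_matrix, (-1, n_filters))
    # ||W^T * W - I||_F, expressed entirely in backend ops so a Tensor is returned
    m = K.dot(K.transpose(w), w) - K.eye(n_filters)
    return K.sqrt(K.sum(K.square(m)))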