I want to train a convolutional network with 4 convolution operations, in which 2 filters share weights but the norm taken across corresponding elements of the two filters stays equal to 1. Say I have input matrices A and B, and filters C and D. The operations I want to perform are:
M1 = tf.nn.conv2d(A, C, strides=[1, 1, 1, 1], padding='SAME')
M2 = tf.nn.conv2d(B, C, strides=[1, 1, 1, 1], padding='SAME')
M3 = tf.nn.conv2d(A, D, strides=[1, 1, 1, 1], padding='SAME')
M4 = tf.nn.conv2d(B, D, strides=[1, 1, 1, 1], padding='SAME')
At the same time, I need sqrt(C^2 + D^2) = 1 element-wise, i.e. for every pair of corresponding filter elements.
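To make the shapes concrete (the sizes below are just an example I made up for illustration), the constraint can be written directly in TensorFlow:

import tensorflow as tf

# assumed shapes: 3x3 kernels, 1 input channel, 1 output channel
C = tf.Variable(tf.random_normal([3, 3, 1, 1]))
D = tf.Variable(tf.random_normal([3, 3, 1, 1]))

pair_norm = tf.sqrt(tf.square(C) + tf.square(D))  # shape [3, 3, 1, 1]
# the requirement is that pair_norm == tf.ones_like(pair_norm) everywhere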
I have already found a way to share weights between different convolution operations by using the same layer twice, as described in the earlier question How to share convolution kernels between layers in keras?.
But I do not know how to formulate the constraint that this norm must equal 1.
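One direction I can think of, sketched below but untested (the layer name PairedConv2D and all of its details are my own invention), is to parametrize both kernels with a single angle tensor theta, so that C = cos(theta) and D = sin(theta) and the constraint holds by construction:

import tensorflow as tf
from keras.layers import Layer

class PairedConv2D(Layer):
    """Two convolutions whose kernels C = cos(theta) and D = sin(theta)
    satisfy sqrt(C**2 + D**2) = 1 element-wise by construction."""
    def __init__(self, kernel_size, **kwargs):
        self.kernel_size = kernel_size
        super(PairedConv2D, self).__init__(**kwargs)

    def build(self, input_shape):
        in_ch = int(input_shape[-1])
        # one angle per kernel element; a single output channel for simplicity
        self.theta = self.add_weight(name='theta',
                                     shape=(self.kernel_size, self.kernel_size,
                                            in_ch, 1),
                                     initializer='uniform', trainable=True)
        super(PairedConv2D, self).build(input_shape)

    def call(self, x):
        C = tf.cos(self.theta)
        D = tf.sin(self.theta)
        return [tf.nn.conv2d(x, C, strides=[1, 1, 1, 1], padding='SAME'),
                tf.nn.conv2d(x, D, strides=[1, 1, 1, 1], padding='SAME')]

    def compute_output_shape(self, input_shape):
        out_shape = input_shape[:3] + (1,)
        return [out_shape, out_shape]

Calling the same PairedConv2D instance on both A and B would then give M1..M4 with the weights shared and the norm fixed at 1, but I am not sure this is the right way to do it.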
Thanks!
I tried introducing an extra input that is passed through a Dense layer with as many units as the kernel has elements, then reshaped and split into two parts using cos(x) and sin(x) before the convolution (I already do this in the code below to modulate the input image). I then apply a manual tf.nn.conv2d() operation. But the resulting kernel carries the batch size as its 0th dimension, which is incompatible with the kernel shape tf.nn.conv2d expects, [filter_height, filter_width, in_channels, out_channels]. Squeezing it does not work.
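To illustrate the shape clash (shapes assumed; the helper name per_sample_conv2d is made up): tf.nn.conv2d accepts one kernel for the whole batch, so a per-sample kernel of shape (batch, kh, kw, 1, 1) can only be applied by looping over the batch, for example with tf.map_fn:

import tensorflow as tf

def per_sample_conv2d(images, kernels):
    """Convolve each image with its own kernel.
    images: (batch, H, W, 1); kernels: (batch, kh, kw, 1, 1)."""
    def conv_one(args):
        img, ker = args
        img = tf.expand_dims(img, 0)  # (1, H, W, 1): conv2d needs a batch dim
        out = tf.nn.conv2d(img, ker, strides=[1, 1, 1, 1], padding='SAME')
        return out[0]                 # drop the singleton batch dim again
    return tf.map_fn(conv_one, (images, kernels), dtype=tf.float32)

I have not verified whether this fits into my model; inside Keras it would also need to be wrapped in a Lambda layer.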
from keras.layers import (Input, Dense, Conv2D, Lambda, Reshape, Multiply,
                          Add, Subtract, AveragePooling2D, Softmax)
from keras.initializers import RandomNormal
from keras.constraints import max_norm
import tensorflow as tf

# data_Mat2 and pool_s are defined elsewhere in my script.
# Shared "real" and "imaginary" kernels; max_norm(1) only bounds each
# kernel separately, it does not couple the two.
conv2d_layer_real = Conv2D(1, data_Mat2.shape[1], padding='same',
                           kernel_constraint=max_norm(1), use_bias=False)
conv2d_layer_imag = Conv2D(1, data_Mat2.shape[1], padding='same',
                           kernel_constraint=max_norm(1), use_bias=False)

input_shape = (data_Mat2.shape[1], data_Mat2.shape[1], 1)
input_shape2 = (1,)
inputs_r = Input(shape=input_shape)    # input image
inputs_r2 = Input(shape=input_shape2)  # scalar driving the trainable phase

# Trainable phase mask, split into cos/sin to modulate the input image.
phase_r2 = Dense(data_Mat2.shape[1] * data_Mat2.shape[1], activation='tanh',
                 use_bias=False,
                 kernel_initializer=RandomNormal(mean=0.0, stddev=0.5))(inputs_r2)
phase_real = Lambda(lambda x: tf.cos(x * 3.1416))(phase_r2)
phase_imag = Lambda(lambda x: tf.sin(x * 3.1416))(phase_r2)
phase_real2 = Reshape((data_Mat2.shape[1], data_Mat2.shape[1], 1))(phase_real)
phase_imag2 = Reshape((data_Mat2.shape[1], data_Mat2.shape[1], 1))(phase_imag)
Mat_real = Multiply()([inputs_r, phase_real2])
Mat_imag = Multiply()([inputs_r, phase_imag2])

# Complex convolution as four real convolutions with shared kernels.
out_conv1 = conv2d_layer_real(Mat_real)
out_conv2 = conv2d_layer_real(Mat_imag)
out_conv3 = conv2d_layer_imag(Mat_real)
out_conv4 = conv2d_layer_imag(Mat_imag)
out_real = Subtract()([out_conv1, out_conv4])  # Re = r*r - i*i
out_imag = Add()([out_conv2, out_conv3])       # Im = r*i + i*r

# Squared modulus, wrapped in Lambda so Keras tracks the raw TF ops.
image_out = Lambda(lambda t: tf.square(tf.abs(tf.complex(t[0], t[1]))))(
    [out_real, out_imag])
image_out = AveragePooling2D(pool_size=(pool_s, pool_s))(image_out)
vector_out = Reshape((9,))(image_out)
outputs = Softmax()(vector_out)
This last piece of code works well, but the norm is not 1, because the weights of the two Conv2D layers are never normalized jointly; there is simply no such constraint in place.
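A possible fix I am considering (an untested sketch; the name UnitPairNorm is my own) is to keep both kernels in one weight tensor inside a single custom layer and attach a constraint that renormalizes each pair after every update:

from keras import backend as K
from keras.constraints import Constraint

class UnitPairNorm(Constraint):
    """Rescales a stacked kernel w of shape (kh, kw, in, out, 2) so that
    sqrt(w[..., 0]**2 + w[..., 1]**2) = 1 element-wise after each update."""
    def __call__(self, w):
        norm = K.sqrt(K.sum(K.square(w), axis=-1, keepdims=True))
        return w / (norm + K.epsilon())

This only works if C and D live in the same weight tensor, because a Keras Constraint sees one weight at a time; two separate Conv2D layers with max_norm(1) can never be coupled this way. The angle parametrization sketched above would avoid the need for a constraint entirely.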