This works for a neural network with one or two layers, but on deeper networks there is a large discrepancy between the given input and the input computed back from the output.
Here is my neural network implementation:
from keras.layers import Input, Dense, LeakyReLU
from keras.models import Model

input_dim = 256
LR = 0.25  # LeakyReLU alpha

# create encoder model
input_plain = Input(shape=(input_dim,))
encoded = Dense(input_dim, use_bias=False)(input_plain)
encoded = LeakyReLU(LR)(encoded)
encoded = Dense(input_dim, use_bias=False)(encoded)
encoded = LeakyReLU(LR)(encoded)
encoded = Dense(input_dim, use_bias=False)(encoded)
encoded = LeakyReLU(LR)(encoded)
encoded = Dense(input_dim, use_bias=False)(encoded)
encoded = LeakyReLU(LR)(encoded)
encoded = Dense(input_dim, use_bias=False)(encoded)
encoded = LeakyReLU(LR)(encoded)
encoded = Dense(input_dim, use_bias=False)(encoded)
encoded = LeakyReLU(LR)(encoded)
encoder = Model(input_plain, encoded)

encoded = encoder.predict(x_test)
And this is the inverse leaky ReLU function:
import numpy as np

def LeakyReLU_inv(alpha, x):
    output = np.copy(x)
    output[output < 0] /= alpha
    return output
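As a quick self-check that this really undoes an element-wise leaky ReLU (the sample values below are purely illustrative):

sample = np.array([-2.0, -0.5, 0.0, 1.5])
forward = np.where(sample < 0, sample * 0.25, sample)      # leaky ReLU with alpha = 0.25
print(np.allclose(LeakyReLU_inv(0.25, forward), sample))   # prints True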
This is how I recover the original input from the output:
encoder_weights = encoder.get_weights()

# invert each weight matrix, then apply the layers in reverse order
decoder_weights = []
for w in encoder_weights:
    decoder_weights.append(np.linalg.inv(w))
decoder_weights.reverse()

x = encoded
for w in decoder_weights:
    x = LeakyReLU_inv(LR, x)
    x = np.dot(x, w)
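A simple way to quantify the mismatch mentioned above (assuming x_test is the same array that was passed to encoder.predict) is:

reconstruction_error = np.max(np.abs(x - x_test))
print("max abs reconstruction error:", reconstruction_error)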
I built a smaller neural network with two layers, implemented the same logic, and it works:
input_plain = Input(shape=(3,))
encoded = Dense(3, use_bias=False)(input_plain)
encoded = LeakyReLU(0.25)(encoded)
encoded = Dense(3, use_bias=False)(encoded)
encoded = LeakyReLU(0.25)(encoded)
encoder = Model(input_plain, encoded)

W1 = encoder.get_weights()[0]
W2 = encoder.get_weights()[1]

# forward pass computed by hand
Z1 = np.dot(X, W1)
Y_calc1 = LeakyReLU_(0.25, Z1)
Z2 = np.dot(Y_calc1, W2)
Y_calc2 = LeakyReLU_(0.25, Z2)

# inverse pass: undo the activations and invert the weight matrices
Y_calc2_inv = LeakyReLU_inv(0.25, Y)
Z_inv2 = np.dot(Y_calc2_inv, np.linalg.inv(W2))
Y_calc1_inv = LeakyReLU_inv(0.25, Z_inv2)
x = np.dot(Y_calc1_inv, np.linalg.inv(W1))
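As a sanity check (assuming X holds the original inputs and Y is the output of encoder.predict(X), i.e. the same values as Y_calc2), the round trip can be confirmed with:

print(np.allclose(x, X, atol=1e-5))  # True for this two-layer network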
Note that I implemented LeakyReLU_ as follows:
def LeakyReLU_(alpha, x):
    output = np.copy(x)
    output[output < 0] *= alpha
    return output
What am I doing wrong in the first, deeper neural network that makes its computed inputs wrong, when the two-layer network gets them right?
Thanks!
Answer 0 (score: 0)
That is far more work than you need to achieve what you want. I would bet what you are actually looking for is an autoencoder. An autoencoder is used to reproduce exactly the same input at the output layer while passing the input through a set of encoding and decoding layers.
The idea is to reduce the dimensionality of the input at the end of the encoding layers, and still be able to reconstruct the input at the output layer from that reduced-dimension tensor with minimal loss of information.
Below is an autoencoder I built to reconstruct the input image at the output layer.
import tensorflow as tf
import tensorflow.contrib.layers as lays

def autoencoder(inputs):
    # encoder
    # 32 x 32 x 1  -> 16 x 16 x 64
    # 16 x 16 x 64 ->  8 x  8 x 32
    #  8 x  8 x 32 ->  4 x  4 x 16
    #  4 x  4 x 16 ->  1 x  1 x 100
    conv1 = lays.conv2d(inputs, 64, [5, 5], stride=2, padding='SAME')
    conv2 = lays.conv2d(conv1, 32, [5, 5], stride=2, padding='SAME')
    conv3 = lays.conv2d(conv2, 16, [5, 5], stride=2, padding='SAME')
    conv4 = lays.conv2d(conv3, 100, [5, 5], stride=4, padding='SAME')
    # decoder
    #  1 x  1 x 100 ->  4 x  4 x 16
    #  4 x  4 x 16  ->  8 x  8 x 32
    #  8 x  8 x 32  -> 16 x 16 x 64
    # 16 x 16 x 64  -> 32 x 32 x 1
    # dconv1 = lays.conv2d_transpose(conv4, 16, [5, 5], stride=4, padding='SAME')
    latent_ph = tf.placeholder_with_default(conv4, [None, 1, 1, 100], name="latent_ph")
    dconv1 = lays.conv2d_transpose(latent_ph, 16, [5, 5], stride=4, padding='SAME')
    dconv2 = lays.conv2d_transpose(dconv1, 32, [5, 5], stride=2, padding='SAME')
    dconv3 = lays.conv2d_transpose(dconv2, 64, [5, 5], stride=2, padding='SAME')
    dconv4 = lays.conv2d_transpose(dconv3, 1, [5, 5], stride=2, padding='SAME', activation_fn=tf.nn.relu)
    # W_conv1 = weights([5, 5, 1, 64])
    # conv1 = conv2d(inputs, W_conv1, stride=(2,2))
    return dconv4, latent_ph, conv4
tf.placeholder_with_default() allows you to feed in a tensor from outside at test time. So if you already have a reduced-dimension representation, you can feed that reduced tensor into the tf.placeholder_with_default() tensor and observe the decoder's output.
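A rough usage sketch of that feeding mechanism (TensorFlow 1.x; the placeholder shape, the dummy arrays, and the session setup below are illustrative assumptions, not part of the original code):

import numpy as np
import tensorflow as tf

images = np.random.rand(8, 32, 32, 1).astype(np.float32)     # stand-in for a real image batch
my_latent = np.random.rand(8, 1, 1, 100).astype(np.float32)  # a hand-crafted latent code

image_ph = tf.placeholder(tf.float32, [None, 32, 32, 1])
recon, latent_ph, _ = autoencoder(image_ph)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # normal forward pass: the latent code is computed from the images
    out_from_images = sess.run(recon, feed_dict={image_ph: images})
    # override the default: feed a latent code directly into the decoder
    out_from_latent = sess.run(recon, feed_dict={latent_ph: my_latent})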
Coming back to your question: whether it is a deep network or a shallow one, a CNN or a fully connected NN, does not matter. Once you implement one of these autoencoders it should work. The only change you have to make is to train with labels = inputs.
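In the Keras style of your original code, a minimal sketch of such an autoencoder could look like this (the 64-unit bottleneck, the optimizer, and the training settings are illustrative assumptions; x_train stands for your own data):

from keras.layers import Input, Dense, LeakyReLU
from keras.models import Model

input_dim = 256
bottleneck_dim = 64  # illustrative reduced dimension

inputs = Input(shape=(input_dim,))
# encoder: compress to the bottleneck
h = Dense(bottleneck_dim)(inputs)
h = LeakyReLU(0.25)(h)
# decoder: expand back to the original dimension
outputs = Dense(input_dim)(h)

autoenc = Model(inputs, outputs)
autoenc.compile(optimizer='adam', loss='mse')
# the key point: the targets are the inputs themselves (labels = inputs)
autoenc.fit(x_train, x_train, epochs=10, batch_size=32)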