Dropping an entire input layer

Posted: 2019-09-11 03:03:51

Tags: python keras neural-network

Suppose I have two inputs (each with a number of features) that I want to feed into a Dropout layer. On each iteration I want one entire input, together with all of its associated features, to be dropped, while every other input is kept.

After concatenating the inputs, I think I need to use the noise_shape argument of Dropout, but the shape of the concatenated layer doesn't really let me do that. For two inputs of shape (15,), the concatenated shape is (None, 30), not (None, 15, 2), so one of the axes is lost and I can't drop along it.

Any suggestions on what I could do? Thanks.

from keras.layers import Input, concatenate, Dense, Dropout

x = Input((15,))  # 15 features for the 1st input
y = Input((15,))  # 15 features for the 2nd input
xy = concatenate([x, y])
print(xy._keras_shape)
# (None, 30)

layer = Dropout(rate=0.5, noise_shape=[xy.shape[0], 1])(xy)  # (None, 30) has no separate per-input axis to drop along
...

1 Answer:

Answer 0: (score: 1)

Edit:

It seems I misunderstood your question; here is an updated answer based on what you actually want.

To achieve what you want, x and y effectively become the timesteps. According to the Keras documentation, if your input has shape (batch_size, timesteps, features) and you want the dropout mask to be the same for all timesteps, you can use noise_shape=(batch_size, 1, features).

x = Input((15,1))  # 15 features for the 1st input
y = Input((15,1))  # 15 features for the 2nd input
xy = concatenate([x, y])  # shape (None, 15, 2): the last axis indexes the two inputs

dropout_layer = Dropout(rate=0.5, noise_shape=[None, 1, 2])(xy)  # one mask per sample, shared across the 15 timesteps
...
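If you would rather keep the original (15,)-shaped inputs from your question, one option (a minimal sketch, not part of the original answer) is to add the extra axis with a Reshape layer before concatenating, so the two inputs end up side by side on the last axis:

from keras.layers import Input, Reshape, concatenate, Dropout

x = Input((15,))                 # 1st input kept in its original (15,) shape
y = Input((15,))                 # 2nd input kept in its original (15,) shape
x_r = Reshape((15, 1))(x)        # (None, 15, 1)
y_r = Reshape((15, 1))(y)        # (None, 15, 1)
xy = concatenate([x_r, y_r])     # (None, 15, 2)
dropout_layer = Dropout(rate=0.5, noise_shape=[None, 1, 2])(xy)

With noise_shape=[None, 1, 2], one binary mask of shape (1, 2) is drawn per sample and broadcast over the 15 timesteps, so each of the two inputs is independently dropped in full with probability 0.5.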

To test that this behaves correctly, you can inspect the intermediate xy layer and the dropout_layer with the following code (reference link):

### Define your model ###

from keras.layers import Input, concatenate, Dropout
from keras.models import Model
from keras import backend as K

# Learning phase must be set to 1 for dropout to work
K.set_learning_phase(1)

x = Input((15,1))  # 15 features for the 1st input
y = Input((15,1))  # 15 features for the 2nd input
xy = concatenate([x, y])

dropout_layer = Dropout(rate=0.5, noise_shape=[None, 1, 2])(xy)

model = Model(inputs=[x, y], outputs=dropout_layer)

# grab the model inputs and the outputs of every layer after the two Input layers

x_inp = model.input[0]                                           
y_inp = model.input[1]
outp = [layer.output for layer in model.layers[2:]]        
functor = K.function([x_inp, y_inp], outp)

### Get some random inputs ###

import numpy as np

input_1 = np.random.random((1,15,1))
input_2 = np.random.random((1,15,1))

layer_outs = functor([input_1,input_2])
print('Intermediate xy layer:\n\n',layer_outs[0])
print('Dropout layer:\n\n', layer_outs[1])

You should see that, as you asked, the entire x or y input is randomly dropped (with a 50% chance):

Intermediate xy layer:

 [[[0.32093528 0.70682645]
  [0.46162075 0.74063486]
  [0.522718   0.22318116]
  [0.7897043  0.7849486 ]
  [0.49387926 0.13929296]
  [0.5754296  0.6273373 ]
  [0.17157765 0.92996144]
  [0.36210892 0.02305864]
  [0.52637625 0.88259524]
  [0.3184462  0.00197006]
  [0.67196816 0.40147918]
  [0.24782693 0.5766827 ]
  [0.25653633 0.00514544]
  [0.8130438  0.2764429 ]
  [0.25275478 0.44348967]]]

Dropout layer:

 [[[0.         1.4136529 ]
  [0.         1.4812697 ]
  [0.         0.44636232]
  [0.         1.5698972 ]
  [0.         0.2785859 ]
  [0.         1.2546746 ]
  [0.         1.8599229 ]
  [0.         0.04611728]
  [0.         1.7651905 ]
  [0.         0.00394012]
  [0.         0.80295837]
  [0.         1.1533654 ]
  [0.         0.01029088]
  [0.         0.5528858 ]
  [0.         0.88697934]]]

If you are wondering why all the elements are multiplied by 2, take a look at how tensorflow implements dropout here.
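In short, Keras/TensorFlow use "inverted dropout": values that survive are rescaled by 1/(1 - rate), which is 1/(1 - 0.5) = 2 here, so the expected activation stays the same at training time. A minimal NumPy sketch of that idea (an illustration, not the actual TensorFlow code), using the first row of the xy output above:

import numpy as np

rate = 0.5
row = np.array([0.32093528, 0.70682645])   # first row of the intermediate xy layer
keep_mask = np.array([0.0, 1.0])            # the sampled mask: drop the x column, keep the y column
print(row * keep_mask / (1.0 - rate))       # -> [0.        1.4136529], matching the dropout output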

Hope this helps.