How does the 1D convolution calculation actually work?

Time: 2019-06-08 14:11:03

Tags: keras time-series conv-neural-network convolution tf.keras

I am trying to implement a 1D convolution with dilation:

# Full signature for reference:
# keras.layers.Conv1D(filters, kernel_size, strides=1, padding='valid', data_format='channels_last', dilation_rate=1, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)
import tensorflow as tf
from tensorflow.keras import layers

# padding can be 'valid', 'causal' or 'same'
conv = layers.Conv1D(1, 3, padding='same',
                     dilation_rate=1,
                     bias_initializer=tf.keras.initializers.zeros)

I want to understand how this 1D convolution actually produces its output.

Let's take an input:

np.squeeze(sequence.numpy())
array([0.        , 0.32380696, 0.61272254, 0.83561502, 0.96846692])
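(For context, and this is just my assumption about the shapes since I didn't show it above: with data_format='channels_last' the layer expects a 3-D input of shape (batch, steps, channels), so sequence was built roughly like this.)

import tensorflow as tf

values = [0., 0.32380696, 0.61272254, 0.83561502, 0.96846692]
sequence = tf.constant(values, dtype=tf.float32)[tf.newaxis, :, tf.newaxis]  # shape (1, 5, 1)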

and the convolution filter is
np.squeeze(conv.trainable_variables[0].numpy())
array([-0.56509803,  0.89481053,  0.6975754 ])
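(If I read the Keras layout correctly, before the squeeze the kernel is stored as (kernel_size, in_channels, filters) and the bias as (filters,); the three numbers above are the kernel taps.)

w, b = conv.trainable_variables
print(w.shape)  # (3, 1, 1) -> (kernel_size, in_channels, filters)
print(b.shape)  # (1,)      -> one bias per filter, zero-initialised here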

When we pass the input through the convolution, the output is

output = conv(sequence)
np.squeeze(output.numpy())
array([0.        , 0.22587977, 0.71716606, 0.94819239, 1.07704752])
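To convince myself of the mechanism, I wrote a small NumPy sketch of what I believe the layer computes: zero padding followed by a sliding dot product with the kernel taps (cross-correlation, i.e. no kernel flip). conv1d_manual is my own helper name, not a Keras function.

import numpy as np

def conv1d_manual(x, w, padding='same', dilation=1):
    # x: 1-D input, w: 1-D kernel taps, single channel / single filter
    k = len(w)
    span = dilation * (k - 1)                        # reach of the (dilated) kernel
    if padding == 'same':
        pad_left, pad_right = span // 2, span - span // 2
    elif padding == 'causal':
        pad_left, pad_right = span, 0                # pad only on the left
    else:                                            # 'valid'
        pad_left = pad_right = 0
    xp = np.concatenate([np.zeros(pad_left), x, np.zeros(pad_right)])
    return np.array([sum(w[j] * xp[t + j * dilation] for j in range(k))
                     for t in range(len(xp) - span)])

x = np.squeeze(sequence.numpy())
w = np.squeeze(conv.trainable_variables[0].numpy())
print(conv1d_manual(x, w, padding='same'))           # compare against conv(sequence)

So each output element t should be sum_j w[j] * x_padded[t + j*dilation], plus the bias (zero here).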

I am trying to implement WaveNet-style 1D convolutions with dilation.

I want to know how this output value is calculated.
What happens if the number of filters and the kernel_size are changed to different values, for example:

conv = layers.Conv1D(2, 3, padding='causal',
                     dilation_rate=1,
                     bias_initializer=tf.keras.initializers.zeros)

conv = layers.Conv1D(3, 3, padding='causal',
                     dilation_rate=1,
                     bias_initializer=tf.keras.initializers.zeros)

conv = layers.Conv1D(1, 3, padding='causal',
                     dilation_rate=2,
                     bias_initializer=tf.keras.initializers.zeros)

conv = layers.Conv1D(2, 3, padding='same',
                     dilation_rate=1,
                     bias_initializer=tf.keras.initializers.zeros)

conv = layers.Conv1D(3, 3, padding='same',
                     dilation_rate=1,
                     bias_initializer=tf.keras.initializers.zeros)

conv = layers.Conv1D(1, 3, padding='same',
                     dilation_rate=2,
                     bias_initializer=tf.keras.initializers.zeros)
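To see what actually changes when filters, kernel_size or dilation_rate change, here is a more general sketch under the same assumptions (zero padding, no kernel flip, kernel stored as (kernel_size, in_channels, filters)); conv1d_general is again my own helper, not part of Keras.

import numpy as np

def conv1d_general(x, w, b=None, padding='causal', dilation=1):
    # x: (steps, in_channels), w: (kernel_size, in_channels, filters), b: (filters,)
    k, cin, f = w.shape
    span = dilation * (k - 1)
    if padding == 'same':
        pad_left, pad_right = span // 2, span - span // 2
    elif padding == 'causal':
        pad_left, pad_right = span, 0
    else:                                            # 'valid'
        pad_left = pad_right = 0
    xp = np.pad(x, ((pad_left, pad_right), (0, 0)))
    out = np.zeros((xp.shape[0] - span, f))
    for t in range(out.shape[0]):
        for j in range(k):                           # one kernel tap at a time
            out[t] += xp[t + j * dilation] @ w[j]    # (in_channels,) @ (in_channels, filters)
    return out if b is None else out + b

# e.g. for conv = layers.Conv1D(2, 3, padding='causal', dilation_rate=1, ...):
# w, b = [v.numpy() for v in conv.trainable_variables]   # w: (3, 1, 2), b: (2,)
# print(conv1d_general(np.squeeze(sequence.numpy())[:, None], w, b, 'causal', 1))

If this is right, more filters just add output channels (each with its own set of taps), a larger kernel_size adds taps per channel, and a larger dilation_rate spaces the taps further apart without adding parameters.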

0 Answers:

No answers yet.