Autocorrelation of the input in TensorFlow / Keras

Asked: 2017-10-18 05:46:26

Tags: tensorflow keras batch-processing convolution

I have a 1D input signal. I want to compute its autocorrelation as part of a neural network, for further use inside the network. So I need to perform a convolution of the input with the input itself. To perform a convolution in a Keras custom layer / TensorFlow, we need the following parameters: data shape is "[batch, in_height, in_width, in_channels]", filter shape is "[filter_height, filter_width, in_channels, out_channels]".

There is no batch dimension in the filter shape, but in my case the filter needs one.
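The shape contract quoted in the question can be illustrated with a naive numpy cross-correlation (a hypothetical helper for illustration, not TensorFlow's implementation):

```python
import numpy as np

def conv2d_valid(data, filt):
    """Naive 'VALID' cross-correlation showing the shape contract.
    data: [batch, in_height, in_width, in_channels]
    filt: [filter_height, filter_width, in_channels, out_channels]
    returns: [batch, out_height, out_width, out_channels]"""
    b, H, W, Cin = data.shape
    fh, fw, fc, Cout = filt.shape
    assert fc == Cin, "filter in_channels must match data in_channels"
    oh, ow = H - fh + 1, W - fw + 1
    out = np.zeros((b, oh, ow, Cout))
    for i in range(oh):
        for j in range(ow):
            patch = data[:, i:i + fh, j:j + fw, :]        # [b, fh, fw, Cin]
            # contract the filter dims, keeping batch and out_channels
            out[:, i, j, :] = np.tensordot(patch, filt, axes=([1, 2, 3], [0, 1, 2]))
    return out

data = np.random.rand(5, 7, 7, 2)
filt = np.random.rand(3, 3, 2, 4)
print(conv2d_valid(data, filt).shape)  # (5, 5, 5, 4)
```

As the question notes, nothing in the filter shape indexes the batch, which is exactly why a per-sample filter requires the workarounds in the answers below.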

3 Answers:

Answer 0 (score: 2)

TensorFlow now has an auto_correlation function. It should land in version 1.6. If you build from source, you can use it right now (see e.g. the github code).
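The operation that function computes can be sketched in plain numpy (the TensorFlow API name and version are as stated in the answer above, not verified here):

```python
import numpy as np

def auto_correlation(x, max_lag=None):
    """Plain-numpy autocorrelation of a 1D signal: the inner product of the
    signal with shifted copies of itself, for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if max_lag is None:
        max_lag = n - 1
    # full cross-correlation has length 2n-1; lag 0 sits at index n-1
    full = np.correlate(x, x, mode='full')
    return full[n - 1 : n + max_lag]

r = auto_correlation([1.0, 2.0, 3.0])
print(r)  # lags 0, 1, 2 -> [14.  8.  3.]
```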

Answer 1 (score: 1)

Here is a possible solution.

By self-convolution, I understood a regular convolution where the filter is exactly the same as the input (if that's not what you mean, sorry for my misunderstanding).

We need a custom function and a Lambda layer.

At first I used padding = 'same', which produces an output with the same length as the input. I'm not sure what output length you want, but if you want more, you should add the padding yourself before doing the convolution. (In the example with length 7, for a full convolution going from one end all the way to the other, this manual padding would add 6 zeros before and 6 zeros after the input, with padding = 'valid'.) Find the backend functions here.
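The manual-padding idea can be checked in plain numpy: padding a length-7 signal with 6 zeros on each side and sliding a 'valid' correlation reproduces numpy's 'full' mode, giving the 2*length - 1 outputs of a complete end-to-end convolution:

```python
import numpy as np

length = 7
x = np.arange(1.0, length + 1.0)  # a length-7 test signal
f = x.copy()                      # filter identical to the input, as in this answer

# Manual 'full' padding: (length - 1) zeros on each side, then a 'valid' sweep
padded = np.pad(x, (length - 1, length - 1))
full_manual = np.array([padded[i:i + length] @ f
                        for i in range(len(padded) - length + 1)])

# Same result straight from numpy's 'full' mode
assert np.allclose(full_manual, np.correlate(x, f, mode='full'))
print(full_manual.shape)  # (13,) == 2*length - 1
```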

Working example - input (5, 7, 2):

import numpy as np
from keras.models import Model
from keras.layers import *
import keras.backend as K

batch_size = 5
length = 7
channels = 2
channels_batch = batch_size*channels

def selfConv1D(x):
    #this function unfortunately needs to know previously the shapes
    #mainly because of the for loop, for other lines, there are workarounds
    #but these workarounds are not necessary since we'll have this limitation anyway

    #original x: (batch_size, length, channels)

    #bring channels to the batch position:
    x = K.permute_dimensions(x,[2,0,1]) #(channels, batch_size, length)

    #suppose channels are just individual samples (since we don't mix channels)
    x = K.reshape(x,(channels_batch,length,1))

    #here, we get a copy of x reshaped to match filter shapes:
    filters = K.permute_dimensions(x,[1,2,0])  #(length, 1, channels_batch)

    #now, in the lack of a suitable available conv function, we make a loop
    allChannels = []
    for i in range (channels_batch):

        f = filters[:,:,i:i+1]
        allChannels.append(
            K.conv1d(
                x[i:i+1], 
                f, 
                padding='same', 
                data_format='channels_last'))
                    #although channels_last is my default config, I found this bug: 
                    #https://github.com/fchollet/keras/issues/8183

        #convolution output: (1, length, 1)

    #concatenate all results as samples
    x = K.concatenate(allChannels, axis=0) #(channels_batch,length,1)

    #restore the original form (passing channels to the end)
    x = K.reshape(x,(channels,batch_size,length))
    return K.permute_dimensions(x,[1,2,0]) #(batch_size, length, channels)


#input data for the test:
x = np.array(range(70)).reshape((5,7,2))

#little model that just performs the convolution
inp= Input((7,2))
out = Lambda(selfConv1D)(inp)

model = Model(inp,out)

#checking results
p = model.predict(x)
for i in range(5):
    print("x",x[i])
    print("p",p[i])
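As an independent sanity check on model.predict, the same per-channel operation can be reproduced in plain numpy. Note that K.conv1d does not flip the filter, so the layer actually computes a cross-correlation; with 'same' padding its output should line up with numpy's 'same' mode (this is a reference sketch, not run against the Keras model here):

```python
import numpy as np

def self_corr_reference(x):
    """Per-channel 'same'-padded self cross-correlation in plain numpy,
    mirroring what the Lambda layer above computes channel by channel."""
    batch, length, channels = x.shape
    out = np.empty_like(x, dtype=float)
    for b in range(batch):
        for c in range(channels):
            s = x[b, :, c].astype(float)
            out[b, :, c] = np.correlate(s, s, mode='same')
    return out

x = np.arange(70, dtype=float).reshape(5, 7, 2)
ref = self_corr_reference(x)
print(ref.shape)  # (5, 7, 2)
```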

Answer 2 (score: 0)

You can use tf.nn.conv3d by treating the "batch size" as the "depth":

# treat the batch size as depth.
data = tf.reshape(input_data, [1, batch, in_height, in_width, in_channels])
# the kernel must be an actual tensor of this shape, not a Python list of dims:
kernel = tf.get_variable(
    'kernel', [filter_depth, filter_height, filter_width, in_channels, out_channels])
out = tf.nn.conv3d(data, kernel, [1, 1, 1, 1, 1], padding='SAME')