What is K-Max Pooling? How do I implement it in Keras?

Asked: 2019-05-24 23:41:00

标签: machine-learning deep-learning nlp conv-neural-network max-pooling

I have to add a k-max pooling layer to a CNN model to detect fake reviews. Please let me know how to implement it using Keras.

I searched the internet, but there aren't enough resources on it.

5 answers:

Answer 0 (score: 1)

According to this paper, k-max pooling is a pooling operation that is a generalisation of the max pooling over the time dimension used in the Max-TDNN sentence model, and different from the local max pooling operations applied in a convolutional network for object recognition (LeCun et al., 1998).

[image: k-max pooling illustration from the paper]

The k-max pooling operation makes it possible to pool the k most active features in p that may be a number of positions apart; it preserves the order of the features, but is insensitive to their specific positions.
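In other words, k-max pooling keeps the k largest activations of a sequence while preserving the order in which they appear. A minimal NumPy sketch of the idea on a single 1-D sequence (my illustration, not from the paper):

```python
import numpy as np

def kmax_pooling_1d(x, k):
    # indices of the k largest values, re-sorted into original sequence order
    idx = np.sort(np.argsort(x)[-k:])
    return x[idx]

seq = np.array([7, 2, 3, 9, 5])
print(kmax_pooling_1d(seq, k=3))  # -> [7 9 5]
```

Note that the result is [7 9 5], not the descending [9 7 5]: the values are the 3 largest, but they stay in their original positions relative to each other.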

There are a few resources showing how to implement it in TensorFlow or Keras; see the answers below.

Answer 1 (score: 0)

Keras implementation of K-Max Pooling with a TensorFlow backend

There seems to be a solution here, as suggested by @Anubhav_Singh. On the Keras GitHub issue link, that response got almost 5 times more thumbs-up (24) than thumbs-down (5). I am just quoting it as-is here so people can try it out and say whether it worked for them or not.

Original author: arbackus

from keras.engine import Layer, InputSpec
from keras.layers import Flatten
import tensorflow as tf

class KMaxPooling(Layer):
    """
    K-max pooling layer that extracts the k-highest activations from a sequence (2nd dimension).
    TensorFlow backend.
    """
    def __init__(self, k=1, **kwargs):
        super().__init__(**kwargs)
        self.input_spec = InputSpec(ndim=3)
        self.k = k

    def compute_output_shape(self, input_shape):
        return (input_shape[0], (input_shape[2] * self.k))

    def call(self, inputs):

        # swap last two dimensions since top_k will be applied along the last dimension
        shifted_input = tf.transpose(inputs, [0, 2, 1])

        # extract top_k, returns two tensors [values, indices]
        top_k = tf.nn.top_k(shifted_input, k=self.k, sorted=True, name=None)[0]

        # return flattened output
        return Flatten()(top_k)

Note: it was reported to run very slowly (although it did work for people).
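As a sanity check of what the quoted layer computes, the same operation can be mirrored in plain NumPy (a sketch I am adding for illustration, not part of the original answer): transpose to (batch, channels, steps), take the k largest values per channel, then flatten.

```python
import numpy as np

def kmax_pool_numpy(x, k):
    """NumPy mirror of the layer above: per-channel top-k over the time
    axis of a (batch, steps, channels) tensor, flattened to
    (batch, channels * k). Like tf.nn.top_k, values come out descending."""
    swapped = np.transpose(x, (0, 2, 1))         # (batch, channels, steps)
    topk = -np.sort(-swapped, axis=-1)[..., :k]  # k largest, descending
    return topk.reshape(x.shape[0], -1)          # same effect as Flatten()

x = np.array([[[1., 10.], [3., 30.], [2., 20.]]])  # (batch=1, steps=3, channels=2)
print(kmax_pool_numpy(x, k=2))                     # [[ 3.  2. 30. 20.]]
```

This also makes the output shape from compute_output_shape visible: for 2 channels and k=2 the layer emits a flat vector of 4 values per sample.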

Answer 2 (score: 0)

Here is a PyTorch implementation of k-max pooling:

import torch

def kmax_pooling(x, dim, k):
    # indices of the k largest values along `dim`, re-sorted into original order
    index = x.topk(k, dim=dim)[1].sort(dim=dim)[0]
    return x.gather(dim, index)

Hope this helps.
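The same order-preserving gather along an arbitrary axis can be reproduced in NumPy (a cross-check I am adding for illustration, not from the original answer): take the indices of the k largest values along the axis, sort those indices back into sequence order, then gather.

```python
import numpy as np

def kmax_pooling_np(x, axis, k):
    # argsort ascending; the last k indices along `axis` point at the k largest
    idx = np.argsort(x, axis=axis)
    idx = np.take(idx, range(x.shape[axis] - k, x.shape[axis]), axis=axis)
    # re-sort the indices so the gathered values keep their original order
    idx = np.sort(idx, axis=axis)
    return np.take_along_axis(x, idx, axis=axis)

x = np.array([[7, 2, 3, 9, 5],
              [1, 8, 6, 0, 4]])
print(kmax_pooling_np(x, axis=1, k=3))  # [[7 9 5] [8 6 4]]
```

Each row independently keeps its 3 largest values in their original left-to-right order, which is exactly what the PyTorch topk/sort/gather chain does.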

Answer 3 (score: 0)

Check this out. It is not thoroughly tested, but it works fine for me. Let me know what you think. P.S. It uses the latest TensorFlow version.

tf.nn.top_k does not preserve the order in which the values occur, so that is the point this implementation works around:

import tensorflow as tf
from tensorflow.keras import layers
class KMaxPooling(layers.Layer):
    """
    K-max pooling layer that extracts the k-highest activations from a sequence (2nd dimension).
    TensorFlow backend.
    """
    def __init__(self, k=1, axis=1, **kwargs):
        super(KMaxPooling, self).__init__(**kwargs)
        self.input_spec = layers.InputSpec(ndim=3)
        self.k = k

        assert axis in [1,2],  'expected dimensions (samples, filters, convolved_values),\
                   cannot fold along samples dimension or axis not in list [1,2]'
        self.axis = axis

        # need to swap the chosen axis with the last element
        # to perform the transpose, since top_k works on the last axis
        self.transpose_perm = [0,1,2] #default
        self.transpose_perm[self.axis] = 2
        self.transpose_perm[2] = self.axis

    def compute_output_shape(self, input_shape):
        input_shape_list = list(input_shape)
        input_shape_list[self.axis] = self.k
        return tuple(input_shape_list)

    def call(self, x):
        # swap sequence dimension to get top k elements along axis=1
        transposed_for_topk = tf.transpose(x, perm=self.transpose_perm)

        # extract top_k, returns two tensors [values, indices]
        top_k_vals, top_k_indices = tf.math.top_k(transposed_for_topk,
                                                  k=self.k, sorted=True,
                                                  name=None)
        # maintain the order of values as in the paper
        # sort indices
        sorted_top_k_ind = tf.sort(top_k_indices)
        flatten_seq = tf.reshape(transposed_for_topk, (-1,))
        shape_seq = tf.shape(transposed_for_topk)
        len_seq = tf.shape(flatten_seq)[0]
        indices_seq = tf.range(len_seq)
        indices_seq = tf.reshape(indices_seq, shape_seq)
        indices_gather = tf.gather(indices_seq, 0, axis=-1)
        indices_sum = tf.expand_dims(indices_gather, axis=-1)
        sorted_top_k_ind += indices_sum
        k_max_out = tf.gather(flatten_seq, sorted_top_k_ind)
        # return back to normal dimension but now sequence dimension has only k elements
        # performing another transpose will get the tensor back to its original shape
        # but will have k as its axis_1 size
        transposed_back = tf.transpose(k_max_out, perm=self.transpose_perm)

        return transposed_back
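The flatten-and-offset gather in the middle of call can be illustrated in NumPy (my illustration, not from the original answer): to look up per-row positions in a flattened 2-D array, add each row's starting offset to its local indices.

```python
import numpy as np

x = np.array([[7., 2., 3., 9., 5.],
              [1., 8., 6., 0., 4.]])
local_idx = np.array([[0, 3, 4],
                      [1, 2, 4]])  # per-row positions to keep, in sequence order
# starting offset of each row in the flattened array
offsets = (np.arange(x.shape[0]) * x.shape[1])[:, None]
flat = x.reshape(-1)
print(flat[local_idx + offsets])   # [[7. 9. 5.] [8. 6. 4.]]
```

Adding `indices_sum` to `sorted_top_k_ind` in the layer above plays the same role as `offsets` here: it turns per-slice indices into global indices into the flattened tensor.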

Answer 4 (score: 0)

Here is my implementation of k-max pooling as explained in the comment of @Anubhav Singh above (the order of the top-k values is preserved):

import tensorflow as tf

def test60_simple_test(a):
    # extract top_k, returns two tensors [values, indices]
    res = tf.nn.top_k(a, k=3, sorted=True, name=None)
    # re-sort the indices into ascending (original sequence) order
    b = tf.sort(res[1], axis=0, direction='ASCENDING', name=None)
    # gather the values back in their original order
    e = tf.gather(a, b)
    return e

a = tf.constant([7, 2, 3, 9, 5], dtype=tf.float64)
print('*input:', a)
print('**output', test60_simple_test(a))

Result:

*input: tf.Tensor([7. 2. 3. 9. 5.], shape=(5,), dtype=float64)
**output tf.Tensor([7. 9. 5.], shape=(3,), dtype=float64)