Implementing an efficient Hadamard transform in TensorFlow

Date: 2018-09-18 18:47:16

Tags: tensorflow optimization

I want to implement a very fast Hadamard transform in TensorFlow.

So far, my best attempt uses an einsum-based approach:

import math

import tensorflow as tf

def hadamard_transform(tensor, num_dimensions):
    # Normalized 2x2 Hadamard matrix.
    s = math.sqrt(0.5)
    h = tf.constant([[s, s], [s, -s]], dtype=tensor.dtype)

    # Apply the Hadamard matrix to each dimension of the tensor.
    for i in range(num_dimensions):
        tensor = targeted_left_multiply(h, tensor, [i])
    return tensor

def targeted_left_multiply(left_matrix, right_target, target_axes):
    # Left-multiplies a matrix into a tensor.

    k = len(target_axes)
    d = len(right_target.shape)
    work_indices = tuple(range(k))
    data_indices = tuple(range(k, k + d))
    used_data_indices = tuple(data_indices[q] for q in target_axes)
    input_indices = work_indices + used_data_indices
    output_indices = list(data_indices)
    for w, t in zip(work_indices, target_axes):
        output_indices[t] = w

    keys = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'
    input_keys = ''.join(keys[i] for i in input_indices)
    output_keys = ''.join(keys[i] for i in output_indices)
    data_keys = ''.join(keys[i] for i in data_indices)
    formula = '{},{}->{}'.format(input_keys, data_keys, output_keys)
    return tf.einsum(formula,
                     left_matrix,
                     right_target)
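The einsum formula construction above can be sanity-checked in plain NumPy, since `np.einsum` uses the same subscript syntax as `tf.einsum`. The following standalone check (not part of the original post) mirrors the index bookkeeping of `targeted_left_multiply` and verifies it against an explicit matrix product:

```python
import numpy as np

def build_formula(target_axes, ndim):
    # Mirrors the index bookkeeping of targeted_left_multiply.
    k = len(target_axes)
    work = tuple(range(k))
    data = tuple(range(k, k + ndim))
    used = tuple(data[q] for q in target_axes)
    out = list(data)
    for w, t in zip(work, target_axes):
        out[t] = w
    keys = 'abcdefghijklmnopqrstuvwxyz'
    return '{},{}->{}'.format(
        ''.join(keys[i] for i in work + used),   # left matrix indices
        ''.join(keys[i] for i in data),          # input tensor indices
        ''.join(keys[i] for i in out))           # output tensor indices

# Apply a 2x2 matrix along axis 1 of a (2, 2) tensor and compare
# with an explicit per-slice matrix product.
m = np.array([[1.0, 1.0], [1.0, -1.0]])
t = np.arange(4.0).reshape(2, 2)
f = build_formula([1], t.ndim)   # 'ac,bc->ba'
r = np.einsum(f, m, t)
expected = (m @ t.T).T           # left-multiply m into axis 1
assert np.allclose(r, expected)
```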

However, in my tests it runs several times slower than a comparable NumPy implementation.
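The post does not include the NumPy implementation it benchmarks against, but a typical O(n log n) fast Walsh-Hadamard transform in NumPy, shown here as an illustrative baseline rather than the author's code, looks like this:

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform over a 1-D array of length 2**k.

    Normalized by 1/sqrt(n) to match the sqrt(0.5) scaling of the
    per-dimension Hadamard matrix used in the TensorFlow version.
    """
    a = np.asarray(a, dtype=np.float64).copy()
    n = a.shape[0]
    h = 1
    while h < n:
        # Butterfly step: combine elements h apart in place.
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a / np.sqrt(n)
```

In practice a vectorized variant (reshaping and operating on whole slices instead of the inner Python loops) is what makes the NumPy version fast.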

Are there any techniques that would make this code more efficient?

0 Answers:

There are no answers yet.