Apply a function to a 3D tensor while ignoring zero rows and padding

Date: 2018-05-15 14:16:45

Tags: python python-3.x tensorflow

I am currently trying to improve the runtime of the most expensive operation in my TensorFlow pipeline.

I am trying to accomplish the following: I am given a 3D tensor containing multiple samples of some patient data, and the data might look like this:

n_hidden = 3  #number of elements per 1D tensor
batch_size = 3 #number of patients
n_mc_samples = 2 #number of samples per patient
rnn_grid_times = [2,3,1] #number of non zero 1D tensors per patient
all_outputs = tf.constant([[[0.15, 0.874, 0.2], [0.1,0.00878,0.58],[0.0,0.0,0.0]], #beginning of patient 1
                               [[0.456,0.454,0.003],[0.4564,0.4984,0.21], [0.0,0.0,0.0]],
                               [[0.121,0.22,0.45],[0.15,0.488,0.222], [0.11,0.849,0.45]],  #beginning of patient 2
                               [[0.15, 0.5646, 0.15], [0.45,0.48949,0.56465], [0.4489,0.456,0.9]],
                               [[0.121, 0.22, 0.01], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]], #beginning of patient 3
                               [[0.15, 0.89, 0.42], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]])

This data corresponds to 3 patients, each sampled twice. As you can see, the data for patients 1 and 3 is padded so that it has the same size as the data for patient 2.

My goal is to feed each non-zero 1D tensor into a single-output neural network with one hidden layer, and then to insert extra padding at the positions of the zero tensors so that the dimensions stay uniform across patients. So here is a valid result:

[[-0.11379365, -0.11188659,  0.        ],
 [-0.11379365, -0.11379365,  0.        ],
 [-0.1135166 , -0.11379365, -0.11379365],
 [-0.11379365, -0.11359671, -0.11270589],
 [-0.11379365,  0.        ,  0.        ],
 [-0.11379365,  0.        ,  0.        ]]

To reiterate, since I realize this is a bit complex: the output associated with [0.15, 0.874, 0.2] in the first code block is -0.11379365 in the second code block.

Here is the code in isolation, with the toy data provided above. It should run without problems if you have a working TensorFlow environment:

import tensorflow as tf

RANDOM_SEED = 42
tf.set_random_seed(RANDOM_SEED)

def code():
    n_hidden = 3
    batch_size = 3
    n_mc_samples = 2
    num_rnn_grid_times = tf.constant([2, 3, 1])
    all_outputs = tf.constant([[[0.15, 0.874, 0.2], [0.1,0.00878,0.58],[0.0,0.0,0.0]], #beginning of patient 1
                               [[0.456,0.454,0.003],[0.4564,0.4984,0.21], [0.0,0.0,0.0]],
                               [[0.121,0.22,0.45],[0.15,0.488,0.222], [0.11,0.849,0.45]], #beginning of patient 2
                               [[0.15, 0.5646, 0.15], [0.45,0.48949,0.56465], [0.4489,0.456,0.9]],
                               [[0.121, 0.22, 0.01], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]], #beginning of patient 3
                               [[0.15, 0.89, 0.42], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]])

    n_extra_hidden_nodes = 2
    extra_hidden_weights = tf.Variable(tf.random_normal([n_hidden, n_extra_hidden_nodes], stddev=0.1), name="HiddenSoftmax/W")
    extra_hidden_biases = tf.Variable(tf.random_normal([n_extra_hidden_nodes], stddev=0.1), name="HiddenSoftmax/b")
    out_weights = tf.Variable(tf.random_normal([n_extra_hidden_nodes, 1], stddev=0.1), name="Softmax/W")
    out_biases = tf.Variable(tf.random_normal([1], stddev=0.1), name="Softmax/b")

    nneth_array_total = tf.Variable([])
    n = tf.constant(0)
    inner_cond = lambda i, nneth_array, n: tf.less(i, num_rnn_grid_times[tf.floordiv(n, n_mc_samples)])
    cond = lambda n, nneth_array_total: tf.less(n, batch_size * n_mc_samples)

    def inner_body(i, nneth_array, n):
        hidden = tf.nn.relu(tf.matmul(tf.expand_dims(all_outputs[n][i], 0), extra_hidden_weights) + extra_hidden_biases)
        nneth = tf.matmul(hidden, out_weights) + out_biases
        nneth = tf.reshape(nneth, [1]) #single output for the neural net
        nneth_array = tf.concat([nneth_array, nneth], 0)
        return i + 1, nneth_array, n

    def body(n, nneth_array_total):
        nneth_array = tf.Variable([])
        i = tf.constant(0) #iterator over 1D tensors
        i, nneth_array, n = tf.while_loop(inner_cond, inner_body,
                                          loop_vars=[i, nneth_array, n],
                                          shape_invariants=[i.get_shape(), tf.TensorShape([None]), n.get_shape()])
        padding = tf.zeros([tf.reduce_max(num_rnn_grid_times) - num_rnn_grid_times[tf.floordiv(n, n_mc_samples)]], dtype=tf.float32)
        nneth_array = tf.concat([nneth_array, padding], 0) #add extra zeros so that all nneth_arrays have same shape
        nneth_array_total = tf.concat([nneth_array_total, nneth_array], 0)
        return n + 1, nneth_array_total

    n, nneth_array_total = tf.while_loop(cond, body,
                                         loop_vars=[n, nneth_array_total],
                                         shape_invariants=[n.get_shape(), tf.TensorShape([None])])
    nneth_array_total = tf.reshape(nneth_array_total, [batch_size * n_mc_samples, tf.reduce_max(num_rnn_grid_times)])
    preds = nneth_array_total
    return preds


if __name__ == '__main__':
    pred = code()
    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)
    print(sess.run([pred]))

The code works, but it is slow. It is part of a pipeline that needs about 1.25 seconds to iterate over one patient, and a large share of that runtime seems to come from the code above. This means one epoch over my dataset takes roughly 12 hours, which is a bit too much compared to similar approaches.

I have googled around and found ways to apply a function to a multidimensional tensor, but none that take the padding into account. Any insights?
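For illustration, here is a minimal sketch of that kind of per-element approach using tf.map_fn (the flattening step and the net_fn helper are assumptions added here, not from the original post); note that it still processes the all-zero padding rows:

# Sketch: apply a per-vector function to every 1D tensor via tf.map_fn.
# flat has shape [batch_size*n_mc_samples*max_len, n_hidden].
flat = tf.reshape(all_outputs, [-1, n_hidden])

def net_fn(x):  # assumed per-vector network, for illustration only
    h = tf.nn.relu(tf.matmul(tf.expand_dims(x, 0), extra_hidden_weights) + extra_hidden_biases)
    return tf.reshape(tf.matmul(h, out_weights) + out_biases, [])

mapped = tf.map_fn(net_fn, flat)  # the padding rows get processed too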

1 Answer:

Answer 0 (score: 1)

You will get the fastest processing time if you feed the whole input, zero vectors included. But as you said, because of the biases in the network, this will return non-zero outputs for the zero rows. Since you want the output to be zero wherever the input vector is zero, a simple trick is to apply a mask that zeroes out the prediction whenever the input vector is all zeros.
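A minimal sketch of that fully vectorized pass, reusing the variable names from the question's code (the reshape to two dimensions is an assumption about how to batch the matrix products):

# Flatten [6, 3, 3] -> [18, 3] so every 1D tensor becomes one row, zero rows included.
flat_inputs = tf.reshape(all_outputs, [-1, n_hidden])
hidden = tf.nn.relu(tf.matmul(flat_inputs, extra_hidden_weights) + extra_hidden_biases)
preds = tf.matmul(hidden, out_weights) + out_biases         # shape [18, 1]
preds = tf.reshape(preds, [batch_size * n_mc_samples, -1])  # shape [6, 3]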

A mask that is 1 where the input vector is non-zero and 0 where it is zero can be obtained with simple logic:

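A minimal sketch of one such mask, assuming all-zero rows are detected by summing absolute values along the last axis:

# 1.0 where the 1D tensor has any non-zero entry, 0.0 for all-zero padding rows.
row_sums = tf.reduce_sum(tf.abs(all_outputs), axis=2)    # shape [6, 3]
mask = tf.cast(tf.not_equal(row_sums, 0.0), tf.float32)  # shape [6, 3]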

Then multiply the predictions by the mask.
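Continuing the sketch above, the final step is a single element-wise product:

masked_preds = preds * mask  # zero input rows now yield exactly 0.0

Since preds was reshaped to the same [6, 3] shape as mask, no broadcasting is needed, and the padded positions of the result come out as exact zeros, matching the desired output layout.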