PyTorch to TensorFlow

Date: 2021-05-25 19:45:53

Tags: python tensorflow pytorch

Is there a way to convert PyTorch code to TensorFlow? I am somewhat familiar with TensorFlow, but completely new to PyTorch. For example:

import numpy as np
import torch
from typing import List, Tuple

def get_variation_uncertainty(prediction_score_vectors: List[torch.Tensor], matrix_size: Tuple) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:

    # Stack the per-sample score tensors into one (samples, voxels, classes) tensor
    prediction_score_vectors = torch.stack(tuple(prediction_score_vectors))

    # Per-voxel variance across samples, scaled by 100 and reshaped to the image grid
    wt_var = np.var(np.sum(prediction_score_vectors[:, :, 1:].cpu().numpy(), axis=2), axis=0).reshape(matrix_size) * 100
    tc_var = np.var(np.sum(prediction_score_vectors[:, :, [1, 3]].cpu().numpy(), axis=2), axis=0).reshape(matrix_size) * 100
    et_var = np.var(prediction_score_vectors[:, :, 3].cpu().numpy(), axis=0).reshape(matrix_size) * 100

    return wt_var.astype(np.uint8), tc_var.astype(np.uint8), et_var.astype(np.uint8)

How can I get the TensorFlow equivalent of the above code?

2 answers:

Answer 0: (score: 1)

All you have to do is convert the stacked TensorFlow tensor to a NumPy array and keep using the same NumPy operations.

Code:

def get_variation_uncertainty(prediction_score_vectors, matrix_size):
    prediction_score_vectors = torch.stack(tuple(prediction_score_vectors))
    wt_var = np.var(np.sum(prediction_score_vectors[:, :, 1:].cpu().numpy(), axis=2), axis=0).reshape(matrix_size) * 100
    tc_var = np.var(np.sum(prediction_score_vectors[:, :, [1, 3]].cpu().numpy(), axis=2), axis=0).reshape(matrix_size) * 100
    et_var = np.var(prediction_score_vectors[:, :, 3].cpu().numpy(), axis=0).reshape(matrix_size) * 100
    return wt_var.astype(np.uint8), tc_var.astype(np.uint8), et_var.astype(np.uint8)

def get_variation_uncertainty_tf(prediction_score_vectors, matrix_size):
    # Convert the stacked TF tensor to NumPy once, then reuse the NumPy code unchanged
    prediction_score_vectors = tf.stack(prediction_score_vectors).numpy()
    wt_var = np.var(np.sum(prediction_score_vectors[:, :, 1:], axis=2), axis=0).reshape(matrix_size) * 100
    tc_var = np.var(np.sum(prediction_score_vectors[:, :, [1, 3]], axis=2), axis=0).reshape(matrix_size) * 100
    et_var = np.var(prediction_score_vectors[:, :, 3], axis=0).reshape(matrix_size) * 100
    return wt_var.astype(np.uint8), tc_var.astype(np.uint8), et_var.astype(np.uint8)

print(get_variation_uncertainty(prediction_score_vectors, (4, 4)))
print(get_variation_uncertainty_tf(prediction_score_vectors, (4, 4)))

Output:

(array([[121, 121, 131, 117],
       [120, 103, 126, 135],
       [112, 125, 114, 112],
       [137, 109, 123, 154]], dtype=uint8), array([[18, 15, 19, 20],
       [17, 13, 14, 17],
       [15, 19, 15, 16],
       [18, 17, 15, 17]], dtype=uint8), array([[8, 8, 8, 8],
       [8, 8, 6, 8],
       [7, 8, 7, 7],
       [9, 8, 8, 7]], dtype=uint8))
(array([[121, 121, 131, 117],
       [120, 103, 126, 135],
       [112, 125, 114, 112],
       [137, 109, 123, 154]], dtype=uint8), array([[18, 15, 19, 20],
       [17, 13, 14, 17],
       [15, 19, 15, 16],
       [18, 17, 15, 17]], dtype=uint8), array([[8, 8, 8, 8],
       [8, 8, 6, 8],
       [7, 8, 7, 7],
       [9, 8, 8, 7]], dtype=uint8))
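The test input `prediction_score_vectors` is not shown above. As a minimal, framework-free sketch (the shapes and the `(4, 4)` `matrix_size` here are assumptions for illustration), the same variance computation can be exercised on synthetic data of shape (samples, voxels, classes) with plain NumPy:

```python
import numpy as np

def get_variation_uncertainty_np(prediction_score_vectors, matrix_size):
    # prediction_score_vectors: array of shape (num_samples, num_voxels, num_classes)
    psv = np.asarray(prediction_score_vectors)
    wt_var = np.var(np.sum(psv[:, :, 1:], axis=2), axis=0).reshape(matrix_size) * 100
    tc_var = np.var(np.sum(psv[:, :, [1, 3]], axis=2), axis=0).reshape(matrix_size) * 100
    et_var = np.var(psv[:, :, 3], axis=0).reshape(matrix_size) * 100
    return wt_var.astype(np.uint8), tc_var.astype(np.uint8), et_var.astype(np.uint8)

rng = np.random.default_rng(0)
# 10 stochastic forward passes over 16 voxels with 4 class scores each
scores = rng.random((10, 16, 4))
wt, tc, et = get_variation_uncertainty_np(scores, (4, 4))
print(wt.shape, tc.shape, et.shape)  # (4, 4) (4, 4) (4, 4)
```

Swapping in a list of torch or TF tensors only changes the stacking step; the NumPy arithmetic is identical in both answers.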

Answer 1: (score: 1)

Following up on the comments, I would suggest using more tf functions to improve performance and reduce the amount of GPU-to-CPU communication needed. Here is an example:

@tf.function
def get_variation_uncertainty_tf(prediction_score_vectors, matrix_size):
    prediction_score_vectors = tf.stack(prediction_score_vectors)    
    wt_var_tmp = tf.math.square(tf.math.reduce_std(tf.reduce_sum(prediction_score_vectors[:, :, 1:], axis=2), axis=0))
    # Two steps because that was getting long
    wt_var = tf.reshape(wt_var_tmp, matrix_size) * 100

    tc_var_tmp = tf.math.square(tf.math.reduce_std(prediction_score_vectors[:, :, 1] + prediction_score_vectors[:, :, 3], axis=0))
    tc_var = tf.reshape(tc_var_tmp, matrix_size) * 100

    et_var_tmp = tf.math.square(tf.math.reduce_std(prediction_score_vectors[:, :, 3], axis=0))
    et_var = tf.reshape(et_var_tmp, matrix_size) * 100
    return tf.cast(wt_var, dtype=tf.uint8), tf.cast(tc_var, dtype=tf.uint8), tf.cast(et_var, dtype=tf.uint8)
    # if you need to return np arrays, do that instead of casting, i.e. (wt_var.numpy()).astype(np.uint8)

Tested here and it works. Which approach is best depends heavily on the shape of your data, so feel free to experiment with different shapes to estimate which one is fastest. In my tests, the mostly-NumPy version was actually faster, unless the dimensions are very large or you run it in batches: https://colab.research.google.com/drive/1miOG6FV9MInanwwQxkYeSXVYirVeUh1r?usp=sharing
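The tf.function version squares tf.math.reduce_std to obtain the variance, while the NumPy version calls np.var directly; the two agree because np.var with its default ddof=0 is the population variance, i.e. exactly the squared population standard deviation. A quick NumPy check of that identity (the array here is arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.random((10, 16))

# np.var defaults to ddof=0 (population variance), which equals the squared
# population standard deviation -- the quantity square(reduce_std) computes in TF.
var_direct = np.var(x, axis=0)
var_from_std = np.std(x, axis=0) ** 2

print(np.allclose(var_direct, var_from_std))  # True
```

If your TF version provides tf.math.reduce_variance, it can replace the square-of-std pattern in the tf.function above directly.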