Non-deterministic behavior of TensorFlow while_loop()

Date: 2018-09-15 10:51:31

Tags: python python-3.x tensorflow

I have implemented an algorithm for large matrices using a TensorFlow while_loop, and recently I noticed strange behavior: I get different results across runs, sometimes even nan values. I spent some time narrowing the problem down and now have the following minimal example. I take a large matrix K of size 15000x15000 filled with ones and compute K⁵u for a vector u filled with ones. After the first iteration I expect the result to be a vector filled with 15000. However, that is not what happens.

import numpy as np
import tensorflow as tf

n = 15000
np_kernel_mat = np.ones((n, n), dtype=np.float32)
kernel_mat = tf.constant(np_kernel_mat)

# for debugging
def compare_kernel(kernel_matrix):
    print("AverageDifference:" + str(np.average(np.abs(np_kernel_mat - kernel_matrix))))
    print("AmountDifferent:" + str(np.count_nonzero(np.abs(np_kernel_mat - kernel_matrix))))
    return True

# body of the loop
def iterate(i, u):
    # for debugging
    with tf.control_dependencies(tf.py_func(compare_kernel, [kernel_mat], [tf.bool])):
        u = tf.identity(u)
    # multiply
    u = tf.matmul(kernel_mat, u)
    # check result and kernel 
    u = tf.Print(u, [tf.count_nonzero(tf.abs(kernel_mat-np_kernel_mat))], "AmountDifferentKernel: ")
    u = tf.Print(u, [tf.count_nonzero(tf.abs(u-float(n)))], "AmountDifferentRes: ")
    i = i + 1
    return i, u


def cond(i, u):
    return tf.less(i, 5)

u0 = tf.fill((n, 1), 1.0, name='u0')
iu_0 = (tf.constant(0), u0)
iu_final = tf.while_loop(cond, iterate, iu_0, back_prop=False, parallel_iterations=1)
u_res = iu_final[1]


with tf.Session() as sess:
    kernel_mat_eval, u_res_eval = sess.run([kernel_mat, u_res])
    print(np.array_equal(kernel_mat_eval, np_kernel_mat))
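For reference, the same computation in plain NumPy is fully deterministic. A minimal sketch of the expected behavior (with n reduced from 15000 so it runs quickly, and float64 used to keep the values exact; these choices are mine, not part of the original example):

```python
import numpy as np

n = 100  # reduced from 15000 so the sketch runs quickly
K = np.ones((n, n), dtype=np.float64)
u = np.ones((n, 1), dtype=np.float64)

# five matrix-vector multiplications: u becomes K^5 u
for _ in range(5):
    u = K @ u

# after k multiplications every entry equals n**k
print(np.allclose(u, float(n) ** 5))  # → True
```

After the first multiplication every entry equals n, after the second n², and so on, which is why the question expects a vector of 15000 after one iteration.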

Running it now, I get the following output:

I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:898] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties: 
name: GeForce GTX TITAN X major: 5 minor: 2 memoryClockRate(GHz): 1.076
pciBusID: 0000:00:0f.0
totalMemory: 11.93GiB freeMemory: 11.81GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
I tensorflow/core/common_runtime/gpu/gpu_device.cc:929]      0 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0:   N 
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 11435 MB memory) -> physical GPU (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:00:0f.0, compute capability: 5.2)
minimal_example.py:25: RuntimeWarning: invalid value encountered in subtract
  print("AverageDifference:" + str(np.average(np.abs(np_kernel_mat - kernel_matrix))))
/usr/local/lib/python3.6/dist-packages/numpy/core/_methods.py:70: RuntimeWarning: overflow encountered in reduce
  ret = umr_sum(arr, axis, dtype, out, keepdims)
AverageDifference:nan
minimal_example.py:26: RuntimeWarning: invalid value encountered in subtract
  print("AmountDifferent:" + str(np.count_nonzero(np.abs(np_kernel_mat - kernel_matrix))))
AmountDifferent:4096
AmountDifferentKernel: [0]
AmountDifferentRes, DifferenceRes: [4][inf]
AverageDifference:nan
AmountDifferent:4096
AmountDifferentKernel: [0]
AmountDifferentRes, DifferenceRes: [15000][nan]
AverageDifference:nan
AmountDifferent:4096
AmountDifferentKernel: [0]
AmountDifferentRes, DifferenceRes: [15000][nan]
AverageDifference:nan
...

Clearly, by the second iteration the result is no longer 15000, but that does not explain why the difference is nan. On the CPU everything works fine (there the difference is 2e08).
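The inf followed by nan in the output is at least consistent with plain float32 saturation: once a value overflows to inf, a subsequent subtraction of two infinities yields nan. A minimal sketch of that arithmetic (my own illustration, not taken from the question's code):

```python
import numpy as np

big = np.float32(3e38)
# 6e38 exceeds the float32 maximum (~3.4e38), so this saturates to inf
overflowed = big * np.float32(2)
print(overflowed)              # inf
# inf - inf is undefined and produces nan
print(overflowed - overflowed) # nan
```

This does not by itself explain where the overflowing values come from on the GPU, only why a difference computed from them would print as nan.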

Now my questions are: Why is the output of the Print op different from what py_func prints? Why does evaluating the matrix afterwards yield the original matrix again? Why do I get different results in different runs? Can anyone reproduce this?

I am running this on Ubuntu 16.04, TensorFlow 1.8, numpy 1.14, Python 3.6. The GPU is a GeForce GTX 1080.

NVRM version: NVIDIA UNIX x86_64 Kernel Module  390.48  Thu Mar 22 00:42:57 PDT 2018
GCC version: gcc version 5.4.0 20160609 (Ubuntu 5.4.0-6ubuntu1~16.04.9)

1 Answer:

Answer 0 (score: 1):

Your problem most likely stems from a seeding issue; make sure you set seeds for both random.seed() and numpy.random.seed(). Since numpy's random seed is independent of the random module's state, you need to seed both.
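A minimal sketch of what this answer suggests (the seed value 0 is arbitrary; note that the question's minimal example contains no explicit random ops, so whether seeding applies here is open):

```python
import random
import numpy as np

random.seed(0)     # seeds Python's built-in generator
np.random.seed(0)  # NumPy keeps its own independent state, seeded separately

a = np.random.rand(3)
np.random.seed(0)  # reseeding reproduces the same draw
b = np.random.rand(3)
print(np.array_equal(a, b))  # → True
```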