float32 placeholder in TensorFlow keeps returning 0

Time: 2017-03-08 09:26:35

Tags: python tensorflow

When I try to run:

import tensorflow as tf

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b  # equivalent to tf.add(a, b)

sess = tf.Session()
print(sess.run(adder_node, {a: 3, b: 4.5}))
print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))
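For reference, with these feed values the two print calls should produce 7.5 (3 + 4.5) and [ 3.  7.] (elementwise 1 + 2 and 3 + 4), i.e. the expected console output would be:

7.5
[ 3.  7.]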

Instead, I keep getting: 0.0 and [0. 0.]. Has anyone else run into the same problem?

When I change the placeholders to int32, the results start to make sense...
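For completeness, here is a minimal sketch of that int32 variant (same graph, only the placeholder dtype changed, and the feed values rounded to integers since int32 cannot hold 4.5); with it the printed sums come out as expected:

import tensorflow as tf

# Same graph as above, but with integer placeholders and integer feeds
a = tf.placeholder(tf.int32)
b = tf.placeholder(tf.int32)
adder_node = a + b

sess = tf.Session()
print(sess.run(adder_node, {a: 3, b: 4}))             # scalar case
print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))   # vector case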

I am using TensorFlow 1.0.1 with Python 3.5.2+, and I also tried Python 2.7.12+ and got the same result.

Here is the full output:

I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: TITAN X (Pascal)
major: 6 minor: 1 memoryClockRate (GHz) 1.531
pciBusID 0000:02:00.0
Total memory: 11.90GiB
Free memory: 11.60GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:590] creating context when one is currently active; existing: 0x33a4610
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 1 with properties:
name: GeForce GT 740
major: 3 minor: 0 memoryClockRate (GHz) 1.0715
pciBusID 0000:03:00.0
Total memory: 1.95GiB
Free memory: 1.66GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:777] Peer access not supported between device ordinals 0 and 1
I tensorflow/core/common_runtime/gpu/gpu_device.cc:777] Peer access not supported between device ordinals 1 and 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 1
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0:   Y N
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 1:   N Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: TITAN X (Pascal), pci bus id: 0000:02:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:962] Ignoring gpu device (device: 1, name: GeForce GT 740, pci bus id: 0000:03:00.0) with Cuda multiprocessor count: 2. The minimum required count is 3. You can adjust this requirement with the env var TF_MIN_GPU_MULTIPROCESSOR_COUNT.
7.0
[ 7.  0.]

0 Answers

No answers yet