tf.unstack does not work as expected: it does not reduce a rank-R tensor to rank-(R-1) tensors.
Corresponding issue on the TensorFlow GitHub issue tracker: https://github.com/tensorflow/tensorflow/issues/22223
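For reference, this is the behaviour I expect from tf.unstack (a minimal sketch in TF 1.x graph mode; the tensor and shapes here are illustrative, not taken from the report below):
import tensorflow as tf
# Unstacking a rank-3 tensor along axis 0 should yield a list of rank-2 tensors.
x = tf.zeros([1, 4, 64])
pieces = tf.unstack(x, axis=0)
print(pieces)  # [<tf.Tensor ... shape=(4, 64) dtype=float32>]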
Code:
#! /usr/bin/env python
# -*- coding: utf-8 -*-
import tensorflow as tf

# Single-layer, unidirectional CudnnGRU with 64 hidden units.
rnn_model = tf.contrib.cudnn_rnn.CudnnGRU(
    num_layers=1,
    num_units=64,
    direction='unidirectional')
rnn_model.build([3, 1, 3])

# Toy float32 input of shape (1, 3, 3).
inputs = [[[1, 1, 1], [1, 1, 1], [1, 1, 1]]]
inputs_tensor = tf.convert_to_tensor(inputs, dtype=tf.float32)
print(tf.shape(inputs_tensor))

rnn_out, rnn_state = rnn_model(inputs_tensor)
print("rnn_state: ", rnn_state)

# Unstack the state along axis 0; I expect the leading dimension to be dropped.
rnn_layers = tf.unstack(rnn_state)
print("rnn_layers", rnn_layers)
Paste the code into a file demo.py and run it from the Linux command line:
$ python3.6 demo.py
Output:
Tensor("Shape:0", shape=(3,), dtype=int32)
rnn_state: (<tf.Tensor 'cudnn_gru/CudnnRNN:1' shape=(1, ?, 64) dtype=float32>,)
rnn_layers [<tf.Tensor 'unstack:0' shape=(1, ?, 64) dtype=float32>]
rnn_layers should instead be:
rnn_layers [<tf.Tensor 'unstack:0' shape=(?, 64) dtype=float32>]
Yes
$uname -r
3.10.0-327.el7.x86_64
No mobile device
anaconda tf 1.8
$conda list|grep tensor
tensorboard 1.8.0 py36hf484d3e_0
tensorflow 1.8.0 hb381393_0
tensorflow-base 1.8.0 py36h4df133c_0
tensorflow-gpu 1.8.0 h7b35bdc_0
$python3.6 -V
Python 3.6.2 :: Continuum Analytics, Inc.
$bazel version
Build label: 0.4.5
Build target: bazel-out/local-fastbuild/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Thu Mar 16 12:19:38 2017 (1489666778)
Build timestamp: 1489666778
Build timestamp as int: 1489666778
$conda list|grep -i cuda
cudatoolkit 8.0 3 https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free
cudnn 7.0.5 cuda8.0_0
== cat /etc/issue ===============================================
Linux rvab01298.sqa.ztt 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
VERSION="7.2 (Paladin)"
VERSION_ID="7.2"
qihoo360_BUGZILLA_PRODUCT_VERSION=7.2
qihoo360_SUPPORT_PRODUCT_VERSION=7.2
== are we in docker =============================================
No
== compiler =====================================================
c++ (GCC) 4.9.2
Copyright (C) 2014 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
== uname -a =====================================================
Linux rvab01298.sqa.ztt 3.10.0-327.el7.x86_64 #1 SMP Thu Nov 19 22:10:57 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
== check pips ===================================================
numpy (1.13.3)
protobuf (3.5.1)
tensorflow (1.8.0)
== check for virtualenv =========================================
False
== tensorflow import ============================================
tf.VERSION = 1.8.0
tf.GIT_VERSION = b'unknown'
tf.COMPILER_VERSION = b'unknown'
Sanity check: array([1], dtype=int32)
== env ==========================================================
LD_LIBRARY_PATH :/usr/local/mpc-0.8.1/lib:/usr/local/gmp-4.3.2/lib:/usr/local/mpfr-2.4.2/lib:/gruntdata/qihoo360/cuda/lib64
DYLD_LIBRARY_PATH is unset
== nvidia-smi ===================================================
Wed Sep 12 13:34:30 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.26                 Driver Version: 375.26                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K40m          On   | 0000:02:00.0     Off |                    0 |
| N/A   36C    P0    67W / 235W |   1161MiB / 11439MiB |     39%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K40m          On   | 0000:03:00.0     Off |                    0 |
| N/A   35C    P0    60W / 235W |     73MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0     13950    C   bin/arks                                       868MiB |
|    0     27880    C   python3.6                                      288MiB |
|    1     27880    C   python3.6                                       71MiB |
+-----------------------------------------------------------------------------+
== cuda libs ===================================================
/usr/local/cuda-8.0/doc/man/man7/libcudart.7
/usr/local/cuda-8.0/doc/man/man7/libcudart.so.7
/usr/local/cuda-8.0/lib64/libcudart_static.a
/usr/local/cuda-8.0/lib64/libcudart.so.8.0.61
/usr/local/cuda-7.5/doc/man/man7/libcudart.7
/usr/local/cuda-7.5/doc/man/man7/libcudart.so.7
/usr/local/cuda-7.5/lib64/libcudart.so.7.5.18
/usr/local/cuda-7.5/lib64/libcudart_static.a
/usr/local/cuda-7.5/lib/libcudart.so.7.5.18
/usr/local/cuda-7.5/lib/libcudart_static.a
Answer (score: 0):
Solved: the return value format changed to a tuple, so rnn_state is a tuple containing the state tensor rather than the state tensor itself.
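A sketch of the corresponding fix, based on the shapes printed above: tf.unstack converts its input to a tensor first, so the one-element tuple becomes a tensor with an extra leading dimension, and unstacking only removes that added dimension again (which matches the unchanged (1, ?, 64) shape seen above). Indexing into the tuple before unstacking gives the expected rank reduction:
state_tensor = rnn_state[0]            # tf.Tensor with shape (1, ?, 64)
rnn_layers = tf.unstack(state_tensor)  # list of tensors with shape (?, 64)
print("rnn_layers", rnn_layers)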