I'm working through a TensorFlow tutorial from a packtpub video series. Unfortunately, the basic RNN in the tutorial no longer seems to work, or something strange is going on. Any insight?
Here is the error I'm getting:
ValueError: Variable RNN/BasicRNNCell/Linear/Matrix already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
File "<ipython-input-23-dcf4ba3c6842>", line 16, in <module>
outputs, states = tf.nn.dynamic_rnn(cell, x_, dtype = tf.float32, initial_state = None)
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2869, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2809, in run_ast_nodes
if self.run_code(code, result):
The error seems to be about that Matrix variable or something.
Here is the code it refers to:
import requests
import numpy as np
import math
import tensorflow as tf
import datetime
from tqdm import tqdm
dataUrl = "https://drcdata.blob.core.windows.net/data/weather.npz"
response = requests.get(dataUrl)
with open("weather.zip", "wb") as code:
    code.write(response.content)
#load into np array
data = np.load("weather.zip")
daily = data['daily']
weekly = data['weekly']
More code:
num_weeks = len(weekly)
dates = np.array([datetime.datetime.strptime(str(int(d)), '%Y%m%d') for d in weekly[:,0]])
def assign_season(date):
    month = date.month
    #spring = 0
    if 3 <= month < 6:
        season = 0
    #summer = 1
    elif 6 <= month < 9:
        season = 1
    elif 9 <= month < 12:
        season = 2
    elif month == 12 or month < 3:
        season = 3
    return season
More code:
num_classes = 4
num_inputs = 5
#Historical state for RNN size
state_size = 11
labels = np.zeros([num_weeks, num_classes])
#read and convert to one-hot
for i,d in enumerate(dates):
    labels[i,assign_season(d)] = 1
#extract and scale training data
train = weekly[:,1:]
train = train - np.average(train,axis=0)
train = train / train.std(axis = 0)
sess = tf.InteractiveSession()
#Inputs
x = tf.placeholder(tf.float32, [None, num_inputs])
#Special RNN TF Input Shape
x_ = tf.reshape(x, [1, num_weeks, num_inputs])
#Define the labels
y_ = tf.placeholder(tf.float32, [None, num_classes])
#Define RNN Cell
#RNN's method for looking back in time.
cell = tf.nn.rnn_cell.BasicRNNCell(state_size)
#Intelligently handles recursion instead of unrolling full computation.
outputs, states = tf.nn.dynamic_rnn(cell, x_, dtype = tf.float32, initial_state = None)
#Define Weights and Biases
W1 = tf.Variable(tf.truncated_normal([state_size, num_classes], stddev = 1.0 / math.sqrt(num_inputs)))
b1 = tf.Variable(tf.constant(0.1, shape = [num_classes]))
#reshape output for normal usage
h1 = tf.reshape(outputs, [-1, state_size])
#softmax output, remember, it's a classifier
y = tf.nn.softmax(tf.matmul(h1, W1) + b1)
Training code:
sess.run(tf.initialize_all_variables())
#Define Cost Function
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y + 1e-50, y_))
#define train step
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
#Define Accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
#Really train this thing.
epochs = 500
train_acc = np.zeros(epochs//10)
test_acc = np.zeros(epochs//10)
for i in tqdm(range(epochs), ascii=True):
    if i % 10 == 0: #record for learning curve display
        A = accuracy.eval(feed_dict={x: train, y_: labels})
        train_acc[i//10] = A
    train_step.run(feed_dict={x: train, y_: labels})
PLOT SOME STUFF
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(train_acc)
Answer 0 (score: 0)
Try clearing or resetting the default graph (see Remove nodes from graph or reset entire default graph). I ran into the same error after declaring my graph with
with tf.Session() as sess:
and resetting the default graph solved the problem for me. My guess is that the IPython notebook keeps the same graph state between runs of a notebook cell, whereas when the code is run as a script the graph is cleared after each run.
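A minimal sketch of that fix, reusing the variable names from the question's code; the key point is just that tf.reset_default_graph() runs before the session and the RNN are built again:
import tensorflow as tf
#Start from an empty default graph, so re-running the notebook cell
#does not try to create RNN/BasicRNNCell/Linear/Matrix a second time.
tf.reset_default_graph()
sess = tf.InteractiveSession()
#Rebuild the model exactly as in the question.
x = tf.placeholder(tf.float32, [None, num_inputs])
x_ = tf.reshape(x, [1, num_weeks, num_inputs])
y_ = tf.placeholder(tf.float32, [None, num_classes])
cell = tf.nn.rnn_cell.BasicRNNCell(state_size)
outputs, states = tf.nn.dynamic_rnn(cell, x_, dtype=tf.float32, initial_state=None)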