An important note: I am only running this part (the graph definition) in a notebook environment; I have not run an actual session yet.
When I run this code:
with graph.as_default():  # took out ", tf.device('/cpu:0')"
    saver = tf.train.Saver()

    valid_examples = np.array(random.sample(range(1, valid_window), valid_size))  # put inside graph to get new words each time
    train_dataset = tf.placeholder(tf.int32, shape=[batch_size, cbow_window*2])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
    valid_datasetSM = tf.constant(valid_examples, dtype=tf.int32)

    embeddings = tf.get_variable('embeddings',
        initializer=tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
    softmax_weights = tf.get_variable('softmax_weights',
        initializer=tf.truncated_normal([vocabulary_size, embedding_size],
                                        stddev=1.0 / math.sqrt(embedding_size)))
    softmax_biases = tf.get_variable('softmax_biases',
        initializer=tf.zeros([vocabulary_size]), trainable=False)

    embed = tf.nn.embedding_lookup(embeddings, train_dataset)  # train_dataset is [batch_size, cbow_window*2]
    embed_reshaped = tf.reshape(embed, [batch_size*cbow_window*2, embedding_size])
    segments = np.arange(batch_size).repeat(cbow_window*2)
    averaged_embeds = tf.segment_mean(embed_reshaped, segments, name=None)

    loss = tf.reduce_mean(
        tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=averaged_embeds,
                                   labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size))

    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keepdims=True))
    normSM = tf.sqrt(tf.reduce_sum(tf.square(softmax_weights), 1, keepdims=True))

    normalized_embeddings = embeddings / norm
    normalized_embeddingsSM = softmax_weights / normSM

    valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
    valid_embeddingsSM = tf.nn.embedding_lookup(normalized_embeddingsSM, valid_datasetSM)

    similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
    similaritySM = tf.matmul(valid_embeddingsSM, tf.transpose(normalized_embeddingsSM))
I got this error:
ValueError: No variables to save
and it pointed to this line:
saver = tf.train.Saver()
I searched Stack Overflow and found this answer:
Tensorflow ValueError: No variables to save from
So I simply moved that line to the bottom of the graph definition:
with graph.as_default():  # took out ", tf.device('/cpu:0')"
    valid_examples = np.array(random.sample(range(1, valid_window), valid_size))  # put inside graph to get new words each time
    train_dataset = tf.placeholder(tf.int32, shape=[batch_size, cbow_window*2])
    train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
    valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
    valid_datasetSM = tf.constant(valid_examples, dtype=tf.int32)

    embeddings = tf.get_variable('embeddings',
        initializer=tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
    softmax_weights = tf.get_variable('softmax_weights',
        initializer=tf.truncated_normal([vocabulary_size, embedding_size],
                                        stddev=1.0 / math.sqrt(embedding_size)))
    softmax_biases = tf.get_variable('softmax_biases',
        initializer=tf.zeros([vocabulary_size]), trainable=False)

    embed = tf.nn.embedding_lookup(embeddings, train_dataset)  # train_dataset is [batch_size, cbow_window*2]
    embed_reshaped = tf.reshape(embed, [batch_size*cbow_window*2, embedding_size])
    segments = np.arange(batch_size).repeat(cbow_window*2)
    averaged_embeds = tf.segment_mean(embed_reshaped, segments, name=None)

    loss = tf.reduce_mean(
        tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=averaged_embeds,
                                   labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size))

    norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keepdims=True))
    normSM = tf.sqrt(tf.reduce_sum(tf.square(softmax_weights), 1, keepdims=True))

    normalized_embeddings = embeddings / norm
    normalized_embeddingsSM = softmax_weights / normSM

    valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
    valid_embeddingsSM = tf.nn.embedding_lookup(normalized_embeddingsSM, valid_datasetSM)

    similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
    similaritySM = tf.matmul(valid_embeddingsSM, tf.transpose(normalized_embeddingsSM))

    saver = tf.train.Saver()
And then there was no error!
Why does this work? The graph definition only defines the graph; it doesn't run anything. Maybe it's a bug-prevention measure?
Answer 0 (score: 3)
It doesn't have to be at the bottom. tf.train.Saver has a defer_build argument which, if set to True, lets you define variables after the saver has been constructed. You then need to call build explicitly:
saver = tf.train.Saver(defer_build=True)
# construct your graph, create variables...
...
saver.build()
graph.finalize()
# go on with training
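To make the pattern concrete, here is a minimal runnable sketch of mine, assuming TensorFlow 1.x; the variable name x and the checkpoint path are illustrative placeholders, not part of the original answer.

import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # the saver is created first, but building it is deferred until variables exist
    saver = tf.train.Saver(defer_build=True)
    x = tf.get_variable('x', initializer=0.0)
    saver.build()      # the saver now picks up x
    init = tf.global_variables_initializer()
    graph.finalize()   # no more ops can be added after this

with tf.Session(graph=graph) as sess:
    sess.run(init)
    saver.save(sess, '/tmp/defer_build_demo')  # hypothetical checkpoint path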
Answer 1 (score: 2)
In the documentation on tf.train.Saver, the __init__ method has an argument var_list, which is described as:
var_list: A list of Variable/SaveableObject, or a dictionary mapping names
to SaveableObjects. If None, defaults to the list of all saveable objects.
This suggests that the saver builds its list of variables to save when it is first constructed, and that by default this list contains all the variables it can find. If no variables have been set up yet, you get the error, because there are no variables to save.
Random examples:
import tensorflow as tf
saver = tf.train.Saver()
The snippet above throws the error, and so does the one below (a placeholder is not a saveable variable):
import tensorflow as tf
x = tf.placeholder(dtype=tf.float32)
saver = tf.train.Saver()
But this last example runs:
import tensorflow as tf
x = tf.Variable(0.0)
saver = tf.train.Saver()
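As a follow-up sketch of mine (assuming TensorFlow 1.x; the names x, y and the checkpoint path are illustrative), you can also pass var_list explicitly, so the saver does not depend on what it can discover at construction time:

import tensorflow as tf

x = tf.Variable(0.0, name='x')
y = tf.Variable(1.0, name='y')
# the saver's variable list is fixed here, at construction time: only x is saved
saver = tf.train.Saver(var_list=[x])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, '/tmp/only_x_demo')  # hypothetical checkpoint path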