I created a REST web service to perform machine translation and made some modifications in translate.py. If I run the decode function in translate.py on its own, I get correct output across multiple runs. But when I try to run the decode function through the web service I created, I get the translation result the first time; on the second iteration, however, I get the error mentioned in the title.
This is the error message I get: ValueError: Variable proj_w already exists, disallowed. Did you mean to set reuse=True in VarScope?
The REST webservice part:
input = request.json['inputtext']
print "'%s'" % input
print 'Please wait'
#import pdb; pdb.set_trace()
#out = demo1.decode(input);
The decode python script:
def decode(sentence):
    with tf.Session() as sess:
        # Create model and load parameters.
        model = create_model(sess, True)
        model.batch_size = 1  # We decode one sentence at a time.
        # Load vocabularies.
        en_vocab_path = os.path.join(FLAGS.data_dir,
                                     "vocab%d.en" % FLAGS.en_vocab_size)
        fr_vocab_path = os.path.join(FLAGS.data_dir,
                                     "vocab%d.fr" % FLAGS.fr_vocab_size)
        en_vocab, _ = data_utils.initialize_vocabulary(en_vocab_path)
        _, rev_fr_vocab = data_utils.initialize_vocabulary(fr_vocab_path)
        # Decode the given sentence (stdin reading left commented out).
        #sys.stdout.write("> ")
        #sys.stdout.flush()
        #sentence = sys.stdin.readline()
        print("reading line %s" % sentence)
        token_ids = data_utils.sentence_to_token_ids(
            tf.compat.as_bytes(sentence), en_vocab)
        bucket_id = min([b for b in xrange(len(_buckets))
                         if _buckets[b][0] > len(token_ids)])
        encoder_inputs, decoder_inputs, target_weights = model.get_batch(
            {bucket_id: [(token_ids, [])]}, bucket_id)
        _, _, output_logits = model.step(sess, encoder_inputs, decoder_inputs,
                                         target_weights, bucket_id, True)
        outputs = [int(np.argmax(logit, axis=1)) for logit in output_logits]
        if data_utils.EOS_ID in outputs:
            outputs = outputs[:outputs.index(data_utils.EOS_ID)]
        #print(" ".join([tf.compat.as_str(rev_fr_vocab[output]) for output in outputs]))
        str1 = [tf.compat.as_str(rev_fr_vocab[output]) for output in outputs]
        output = ' '.join(str1)
        print("output line %s\n" % output)
        sys.stdout.flush()
    return output
It works the first time. But on the next hit of the webservice, I get this error: "ValueError: Variable proj_w already exists, disallowed. Did you mean to set reuse=True in VarScope?"
Answer 0 (score: 1)
With the following modifications, I can run the web service smoothly. I create the model and tf.Session() only once; previously they were created on every hit to the web service.
model = None
en_vocab_path = None
fr_vocab_path = None
sess = None

def decode(sentence):
    global sess, model, en_vocab_path, fr_vocab_path
    if sess is None:
        sess = tf.Session()
    if model is None:
        model = create_model(sess, True)
        model.batch_size = 1  # We decode one sentence at a time.
    # Load vocabularies only once.
    if en_vocab_path is None:
        en_vocab_path = os.path.join(FLAGS.data_dir,
                                     "vocab%d.en" % FLAGS.en_vocab_size)
    if fr_vocab_path is None:
        fr_vocab_path = os.path.join(FLAGS.data_dir,
                                     "vocab%d.fr" % FLAGS.fr_vocab_size)
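The fix above is the standard lazy-initialization (create-once, reuse-forever) pattern. A self-contained sketch of the same idea, with load_model() as a hypothetical stand-in for create_model(sess, True) and a lock added because web frameworks often serve requests from multiple threads:

```python
import threading

_lock = threading.Lock()
_model = None


def load_model():
    # Hypothetical stand-in for the expensive create_model(sess, True) call.
    return {"name": "seq2seq"}


def get_model():
    """Create the expensive model on the first request, reuse it afterwards."""
    global _model
    if _model is None:                # fast path: already initialized
        with _lock:                   # guard two concurrent first requests
            if _model is None:        # re-check after acquiring the lock
                _model = load_model()
    return _model


# Every caller gets the same cached instance.
assert get_model() is get_model()
```

With this pattern the graph is built exactly once per process, so tf.get_variable never sees the proj_w name twice and the ValueError disappears.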