TensorFlow InvalidArgumentError: indices[40] = 20000 is not in [0, 20000)

Date: 2018-03-26 13:33:07

Tags: python windows tensorflow

I ran this code (https://github.com/monkut/tensorflow_chatbot — the main code is in execute.py) on Windows 7 with Python 3.5 and TensorFlow r0.12 (CPU), and the error occurred after 300 steps. I then tried changing the vocabulary size to 30000 and saving a checkpoint every 100 steps. With 1 layer of 128 units the error appeared after 3900 steps, and with 3 layers of 256 units after 5400 steps. What is this error, and is there a way to fix it?

Error:

>> Mode : train

Preparing data in working_dir/
Creating vocabulary working_dir/vocab20000.enc from data/train.enc
  processing line 100000
>> Full Vocabulary Size : 45408
>>>> Vocab Truncated to: 20000
Creating vocabulary working_dir/vocab20000.dec from data/train.dec
  processing line 100000
>> Full Vocabulary Size : 44271
>>>> Vocab Truncated to: 20000
Tokenizing data in data/train.enc
  tokenizing line 100000
Tokenizing data in data/train.dec
  tokenizing line 100000
Tokenizing data in data/test.enc
Creating 3 layers of 256 units.
Created model with fresh parameters.
Reading development and training data (limit: 0).
  reading data line 100000
global step 300 learning rate 0.5000 step-time 3.34 perplexity 377.45
  eval: bucket 0 perplexity 96.25
  eval: bucket 1 perplexity 210.94
  eval: bucket 2 perplexity 267.86
  eval: bucket 3 perplexity 365.77
Traceback (most recent call last):
  File "C:\Python35 64\lib\site-packages\tensorflow\python\client\session.py", line 1021, in _do_call
    return fn(*args)
  File "C:\Python35 64\lib\site-packages\tensorflow\python\client\session.py", line 1003, in _run_fn
    status, run_metadata)
  File "C:\Python35 64\lib\contextlib.py", line 66, in __exit__
    next(self.gen)
  File "C:\Python35 64\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 469, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[40] = 20000 is not in [0, 20000)
         [[Node: model_with_buckets/sequence_loss_3/sequence_loss_by_example/sampled_softmax_loss_28/embedding_lookup_1 = Gather[Tindices=DT_INT64, Tparams=DT_FLOAT, _class=["loc:@proj_b"], validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](proj_b/read, model_with_buckets/sequence_loss_3/sequence_loss_by_example/sampled_softmax_loss_28/concat)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "execute.py", line 352, in <module>
    train()
  File "execute.py", line 180, in train
    target_weights, bucket_id, False)
  File "C:\Users\Администратор\Downloads\tensorflow_chatbot-master (1)\tensorflow_chatbot-master\seq2seq_model.py", line 230, in step
    outputs = session.run(output_feed, input_feed)
  File "C:\Python35 64\lib\site-packages\tensorflow\python\client\session.py", line 766, in run
    run_metadata_ptr)
  File "C:\Python35 64\lib\site-packages\tensorflow\python\client\session.py", line 964, in _run
    feed_dict_string, options, run_metadata)
  File "C:\Python35 64\lib\site-packages\tensorflow\python\client\session.py", line 1014, in _do_run
    target_list, options, run_metadata)
  File "C:\Python35 64\lib\site-packages\tensorflow\python\client\session.py", line 1034, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[40] = 20000 is not in [0, 20000)
         [[Node: model_with_buckets/sequence_loss_3/sequence_loss_by_example/sampled_softmax_loss_28/embedding_lookup_1 = Gather[Tindices=DT_INT64, Tparams=DT_FLOAT, _class=["loc:@proj_b"], validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](proj_b/read, model_with_buckets/sequence_loss_3/sequence_loss_by_example/sampled_softmax_loss_28/concat)]]

Caused by op 'model_with_buckets/sequence_loss_3/sequence_loss_by_example/sampled_softmax_loss_28/embedding_lookup_1', defined at:
  File "execute.py", line 352, in <module>
    train()
  File "execute.py", line 148, in train
    model = create_model(sess, False)
  File "execute.py", line 109, in create_model
    gConfig['learning_rate_decay_factor'], forward_only=forward_only)
  File "C:\Users\Администратор\Downloads\tensorflow_chatbot-master (1)\tensorflow_chatbot-master\seq2seq_model.py", line 158, in __init__
    softmax_loss_function=softmax_loss_function)
  File "C:\Python35 64\lib\site-packages\tensorflow\python\ops\seq2seq.py", line 1130, in model_with_buckets
    softmax_loss_function=softmax_loss_function))
  File "C:\Python35 64\lib\site-packages\tensorflow\python\ops\seq2seq.py", line 1058, in sequence_loss
    softmax_loss_function=softmax_loss_function))
  File "C:\Python35 64\lib\site-packages\tensorflow\python\ops\seq2seq.py", line 1022, in sequence_loss_by_example
    crossent = softmax_loss_function(logit, target)
  File "C:\Users\Администратор\Downloads\tensorflow_chatbot-master (1)\tensorflow_chatbot-master\seq2seq_model.py", line 101, in sampled_loss
    self.target_vocab_size)
  File "C:\Python35 64\lib\site-packages\tensorflow\python\ops\nn.py", line 1412, in sampled_softmax_loss
    name=name)
  File "C:\Python35 64\lib\site-packages\tensorflow\python\ops\nn.py", line 1184, in _compute_sampled_logits
    all_b = embedding_ops.embedding_lookup(biases, all_ids)
  File "C:\Python35 64\lib\site-packages\tensorflow\python\ops\embedding_ops.py", line 110, in embedding_lookup
    validate_indices=validate_indices)
  File "C:\Python35 64\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 1293, in gather
    validate_indices=validate_indices, name=name)
  File "C:\Python35 64\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 759, in apply_op
    op_def=op_def)
  File "C:\Python35 64\lib\site-packages\tensorflow\python\framework\ops.py", line 2240, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "C:\Python35 64\lib\site-packages\tensorflow\python\framework\ops.py", line 1128, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): indices[40] = 20000 is not in [0, 20000)
         [[Node: model_with_buckets/sequence_loss_3/sequence_loss_by_example/sampled_softmax_loss_28/embedding_lookup_1 = Gather[Tindices=DT_INT64, Tparams=DT_FLOAT, _class=["loc:@proj_b"], validate_indices=true, _device="/job:localhost/replica:0/task:0/cpu:0"](proj_b/read, model_with_buckets/sequence_loss_3/sequence_loss_by_example/sampled_softmax_loss_28/concat)]]
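The error says an index of 20000 was fed into an embedding table that only has rows 0 through 19999, which suggests the tokenized data contains a token id equal to the vocabulary size. A quick sanity check is to scan the token ids for out-of-range values; the sketch below is illustrative only (the helper name, the sample ids, and the assumption that your ids files are whitespace-separated integers are mine, not from the project):

```python
# Minimal sketch: find token ids that would break an embedding lookup
# over a table with `vocab_size` rows, i.e. ids outside [0, vocab_size).
vocab_size = 20000

def out_of_range_ids(token_ids, vocab_size):
    """Return the token ids that fall outside the half-open range [0, vocab_size)."""
    return [i for i in token_ids if not 0 <= i < vocab_size]

# Illustrative sample; in practice, read the ids from the tokenized
# files that the data-preparation step wrote under working_dir/.
sample = [17, 4099, 19999, 20000]
print(out_of_range_ids(sample, vocab_size))  # [20000] - this id is invalid
```

Any id the check reports would trigger exactly this `InvalidArgumentError` in the `Gather` op behind `embedding_lookup`.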

2 Answers:

Answer 0 (score: 0)

The notation [) is half-open interval notation: a square bracket means the endpoint is included, while a parenthesis means it is excluded. The same holds on the right-hand side, i.e. ] and ). For example, [0, 20000) means from 0 inclusive up to 20000 exclusive. The bracket says "yes, include this number"; the parenthesis says "no, don't go all the way up to this number".
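In code terms, the half-open interval [0, n) is exactly what Python's built-in `range(n)` covers, which makes the bounds easy to demonstrate (a tiny illustration, not from the original answer):

```python
n = 20000
valid = range(n)           # models the half-open interval [0, 20000)

print(19999 in valid)      # True: 19999 is the last valid index
print(20000 in valid)      # False: 20000 is excluded, hence the error
```

So for an embedding table with 20000 rows, 20000 itself is never a valid row index.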

Answer 1 (score: 0)

Using virtualenv with tensorflow-gpu 0.12.0 seems to have solved my problem.