Training a Transformer with Tensor2Tensor on my own data

Time: 2019-04-17 20:56:09

Tags: python tensorflow tensor2tensor

I'm trying to train a Transformer network with Tensor2Tensor. I'm modifying the Cloud Poetry example to fit my own task, kt_problem, in which I map sequences of floats to sequences of floats rather than sentences to sentences.

I've adapted the generate_data() and generate_samples() functions according to the scattered specifications for using your own data with tensor2tensor (e.g. the data generation README, line 174 of the Problem class, etc.). They are as follows:

  def generate_samples(self, data_dir, tmp_dir, train):
    import numpy as np
    import pandas as pd

    # Both CSVs are read as float64, so every value yielded below is a float.
    features = pd.read_csv("data/kt/features.csv", dtype=np.float64)
    targets = pd.read_csv("data/kt/targets.csv", dtype=np.float64)

    # Yield one {"inputs", "targets"} dict per row, each a list of floats.
    for i in range(len(features) - 1):
        yield {
                "inputs": list(features.iloc[i]),
                "targets": list(targets.iloc[i])
        }




  def generate_data(self, data_dir, tmp_dir, task_id=-1):
    # Write shuffled TFRecord shards for the training and dev splits.
    generator_utils.generate_dataset_and_shuffle(
        self.generate_samples(data_dir, tmp_dir, 1),
        self.training_filepaths(data_dir, 4, False),
        self.generate_samples(data_dir, tmp_dir, 0),
        self.dev_filepaths(data_dir, 3, False))

defined in my class KTProblem.
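
As a quick sanity check on the data side (a minimal sketch that just reuses the same CSV paths as generate_samples above), I can confirm that the generator really yields floats before running datagen:

    import numpy as np
    import pandas as pd

    # Load the same CSVs the generator reads, forced to float64.
    features = pd.read_csv("data/kt/features.csv", dtype=np.float64)
    targets = pd.read_csv("data/kt/targets.csv", dtype=np.float64)

    sample = {"inputs": list(features.iloc[0]), "targets": list(targets.iloc[0])}
    print(type(sample["inputs"][0]))   # prints <class 'numpy.float64'>
    print(type(sample["targets"][0]))  # prints <class 'numpy.float64'>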

After making this change, I can successfully run

PROBLEM='kt_problem'    #my own problem, for which I've defined a class

%%bash
DATA_DIR=./t2t_data     
TMP_DIR=$DATA_DIR/tmp

t2t-datagen \
  --t2t_usr_dir=./kt/trainer \
  --problem=$PROBLEM \
  --data_dir=$DATA_DIR \
  --tmp_dir=$TMP_DIR

and it generates a bunch of training and dev files. But when I then try to train a Transformer on them with this code,

%%bash
DATA_DIR=./t2t_data
OUTDIR=./trained_model

t2t-trainer \
  --data_dir=$DATA_DIR \
  --t2t_usr_dir=./kt/trainer \
  --problem=$PROBLEM \
  --model=transformer \
  --hparams_set=transformer_kt \
  --output_dir=$OUTDIR --job-dir=$OUTDIR --train_steps=10

it raises the following error:

ValueError: x has to be a floating point tensor since it's going to be scaled. Got a <dtype: 'int32'> tensor instead.

As you can see in generate_samples(), the data being generated is np.float64, so I'm confident my inputs should not be int32. The stack trace (posted below) is quite long, and I've been going through each line it lists and checking the types of the inputs to see where this int32 input enters the picture, but I can't find it. I'd like to know (1) why/how/where my inputs, if they are floats, are becoming ints, but mostly (2) how should I go about debugging code like this? So far my approach has been to put print statements before each line in the stack trace, but that seems like a naive way to debug. Would using VSCode be better, or is this a case where the tensor2tensor library just isn't behaving the way I expect, and what should I be learning here — getting familiar with what every function in the stack trace is doing?
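
One check I can at least script (a minimal sketch; the kt_problem-train* file pattern is assumed from the DATA_DIR used above) is to open one of the TFRecord shards that t2t-datagen wrote and look at how "inputs" and "targets" were serialized; if they come back as int64_list rather than float_list, the int32 tensor already originates in the data files rather than in the model code:

    import glob
    import tensorflow as tf

    # Inspect the first training shard written by t2t-datagen.
    path = glob.glob("./t2t_data/kt_problem-train*")[0]

    for record in tf.python_io.tf_record_iterator(path):
        example = tf.train.Example.FromString(record)
        for name, feature in example.features.feature.items():
            # "kind" is one of int64_list, float_list, bytes_list.
            print(name, feature.WhichOneof("kind"))
        break  # one example is enough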

Stack trace:

INFO:tensorflow:Importing user module trainer from path /home/crytting/kt/kt
WARNING:tensorflow:From /home/crytting/kt/tensor2tensor/tensor2tensor/utils/trainer_lib.py:240: RunConfig.__init__ (from tensorflow.contrib.learn.python.learn.estimators.run_config) is deprecated and will be removed in a future version.
Instructions for updating:
When switching to tf.estimator.Estimator, use tf.estimator.RunConfig instead.
INFO:tensorflow:Configuring DataParallelism to replicate the model.
INFO:tensorflow:schedule=continuous_train_and_eval
INFO:tensorflow:worker_gpu=1
INFO:tensorflow:sync=False
WARNING:tensorflow:Schedule=continuous_train_and_eval. Assuming that training is running on a single machine.
INFO:tensorflow:datashard_devices: ['gpu:0']
INFO:tensorflow:caching_devices: None
INFO:tensorflow:ps_devices: ['gpu:0']
INFO:tensorflow:Using config: {'_task_type': None, '_task_id': 0, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f04151caba8>, '_master': '', '_num_ps_replicas': 0, '_num_worker_replicas': 0, '_environment': 'local', '_is_chief': True, '_evaluation_master': '', '_train_distribute': None, '_eval_distribute': None, '_device_fn': None, '_tf_config': gpu_options {
  per_process_gpu_memory_fraction: 1.0
}
, '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_secs': None, '_log_step_count_steps': 100, '_protocol': None, '_session_config': gpu_options {
  per_process_gpu_memory_fraction: 0.95
}
allow_soft_placement: true
graph_options {
  optimizer_options {
    global_jit_level: OFF
  }
}
isolate_session_state: true
, '_save_checkpoints_steps': 1000, '_keep_checkpoint_max': 20, '_keep_checkpoint_every_n_hours': 10000, '_model_dir': './trained_model', 'use_tpu': False, 't2t_device_info': {'num_async_replicas': 1}, 'data_parallelism': <tensor2tensor.utils.expert_utils.Parallelism object at 0x7f0464512dd8>}
WARNING:tensorflow:Estimator's model_fn (<function T2TModel.make_estimator_model_fn.<locals>.wrapping_model_fn at 0x7f0414891e18>) includes params argument, but params are not passed to Estimator.
WARNING:tensorflow:ValidationMonitor only works with --schedule=train_and_evaluate
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 1000 or save_checkpoints_secs None.
WARNING:tensorflow:From /home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
INFO:tensorflow:Reading data files from ./t2t_data/kt_problem-train*
INFO:tensorflow:partition: 0 num_data_files: 4
WARNING:tensorflow:From /home/crytting/kt/tensor2tensor/tensor2tensor/utils/data_reader.py:275: tf_record_iterator (from tensorflow.python.lib.io.tf_record) is deprecated and will be removed in a future version.
Instructions for updating:
Use eager execution and: 
`tf.data.TFRecordDataset(path)`
WARNING:tensorflow:From /home/crytting/kt/tensor2tensor/tensor2tensor/utils/data_reader.py:37: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:Shapes are not fully defined. Assuming batch_size means tokens.
WARNING:tensorflow:From /home/crytting/kt/tensor2tensor/tensor2tensor/utils/data_reader.py:233: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Setting T2TModel mode to 'train'
INFO:tensorflow:Using variable initializer: uniform_unit_scaling
INFO:tensorflow:Building model body
WARNING:tensorflow:From /home/crytting/kt/tensor2tensor/tensor2tensor/models/transformer.py:156: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
Traceback (most recent call last):
  File "/home/crytting/anaconda3/envs/kt/bin/t2t-trainer", line 33, in <module>
    tf.app.run()
  File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "/home/crytting/anaconda3/envs/kt/bin/t2t-trainer", line 28, in main
    t2t_trainer.main(argv)
  File "/home/crytting/kt/tensor2tensor/tensor2tensor/bin/t2t_trainer.py", line 400, in main
    execute_schedule(exp)
  File "/home/crytting/kt/tensor2tensor/tensor2tensor/bin/t2t_trainer.py", line 356, in execute_schedule
    getattr(exp, FLAGS.schedule)()
  File "/home/crytting/kt/tensor2tensor/tensor2tensor/utils/trainer_lib.py", line 400, in continuous_train_and_eval
    self._eval_spec)
  File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/training.py", line 471, in train_and_evaluate
    return executor.run()
  File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/training.py", line 611, in run
    return self.run_local()
  File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/training.py", line 712, in run_local
    saving_listeners=saving_listeners)
  File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 358, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1124, in _train_model
    return self._train_model_default(input_fn, hooks, saving_listeners)
  File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1155, in _train_model_default
    features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
  File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1112, in _call_model_fn
    model_fn_results = self._model_fn(features=features, **kwargs)
  File "/home/crytting/kt/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 1414, in wrapping_model_fn
    use_tpu=use_tpu)
  File "/home/crytting/kt/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 1477, in estimator_model_fn
    logits, losses_dict = model(features)  # pylint: disable=not-callable
  File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow/python/layers/base.py", line 530, in __call__
    outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
  File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 554, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File "/home/crytting/kt/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 323, in call
    sharded_logits, losses = self.model_fn_sharded(sharded_features)
  File "/home/crytting/kt/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 400, in model_fn_sharded
    sharded_logits, sharded_losses = dp(self.model_fn, datashard_to_features)
  File "/home/crytting/kt/tensor2tensor/tensor2tensor/utils/expert_utils.py", line 231, in __call__
    outputs.append(fns[i](*my_args[i], **my_kwargs[i]))
  File "/home/crytting/kt/tensor2tensor/tensor2tensor/utils/t2t_model.py", line 428, in model_fn
    body_out = self.body(transformed_features)
  File "/home/crytting/kt/tensor2tensor/tensor2tensor/models/transformer.py", line 280, in body
    **decode_kwargs
  File "/home/crytting/kt/tensor2tensor/tensor2tensor/models/transformer.py", line 217, in decode
    **kwargs)
  File "/home/crytting/kt/tensor2tensor/tensor2tensor/models/transformer.py", line 156, in transformer_decode
    1.0 - hparams.layer_prepostprocess_dropout)
  File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 2979, in dropout
    return dropout_v2(x, rate, noise_shape=noise_shape, seed=seed, name=name)
  File "/home/crytting/anaconda3/envs/kt/lib/python3.7/site-packages/tensorflow/python/ops/nn_ops.py", line 3021, in dropout_v2
    " be scaled. Got a %s tensor instead." % x.dtype)
ValueError: x has to be a floating point tensor since it's going to be scaled. Got a <dtype: 'int32'> tensor instead.
