How can I export an onsets_frames_transcription checkpoint to a PB model for TF Serving?

Asked: 2020-10-15 09:52:15

Tags: tensorflow deep-learning tensorflow-serving magenta

I want to serve onsets_frames_transcription, but the preprocessing of the audio example protos happens in data.provide_batch, which returns a Dataset object.

def provide_batch(examples,
                  preprocess_examples,
                  params,
                  is_training,
                  shuffle_examples,
                  skip_n_initial_records):
  """Returns batches of tensors read from TFRecord files.
  Args:
    examples: A string path to a TFRecord file of examples, a python list of
      serialized examples, or a Tensor placeholder for serialized examples.
    preprocess_examples: Whether to preprocess examples. If False, assume they
      have already been preprocessed.
    params: HParams object specifying hyperparameters. Called 'params' here
      because that is the interface that TPUEstimator expects.
    is_training: Whether this is a training run.
    shuffle_examples: Whether examples should be shuffled.
    skip_n_initial_records: Skip this many records at first.
  Returns:
    Batched tensors in a TranscriptionData NamedTuple.
  """
  hparams = params

  input_dataset = read_examples(
      examples, is_training, shuffle_examples, skip_n_initial_records, hparams)

  if preprocess_examples:
    input_map_fn = functools.partial(
        preprocess_example, hparams=hparams, is_training=is_training)
  else:
    input_map_fn = parse_preprocessed_example
  input_tensors = input_dataset.map(input_map_fn)

  model_input = input_tensors.map(
      functools.partial(
          input_tensors_to_model_input,
          hparams=hparams, is_training=is_training))

  model_input = splice_examples(model_input, hparams, is_training)
  dataset = create_batch(model_input, hparams=hparams, is_training=is_training)
  return dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)
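For context, this is roughly how I understand provide_batch being used outside of serving (just a sketch; the 'onsets_frames' config key and the exact hparams source are my assumption):

from magenta.models.onsets_frames_transcription import configs, data

# Sketch: run the full preprocessing pipeline over a TFRecord of examples,
# the way it would be done for eval. The config key is an assumption.
hparams = configs.CONFIG_MAP['onsets_frames'].hparams
dataset = data.provide_batch(
    examples='/path/to/examples.tfrecord',  # path, list of strings, or a placeholder
    preprocess_examples=True,
    params=hparams,
    is_training=False,
    shuffle_examples=False,
    skip_n_initial_records=0)
# dataset is a tf.data.Dataset of batched TranscriptionData tuples.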

How can I use the tf.estimator export mechanism to export the model checkpoint to a PB model that includes all of this preprocessing, and how should the serving_input_receiver be built? Is there any way to run this preprocessing on the incoming tf.Example protos at serving time?
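What I have in mind is something along these lines (a rough sketch, not working code: it assumes provide_batch accepts a Tensor placeholder of serialized examples as its docstring says, that the dataset's get_next() output can be handed to ServingInputReceiver as the model features, and that train_util.create_estimator builds the estimator; those names and details may need adjusting):

import tensorflow.compat.v1 as tf
from magenta.models.onsets_frames_transcription import configs, data, train_util

config = configs.CONFIG_MAP['onsets_frames']  # config key is an assumption
hparams = config.hparams

def serving_input_receiver_fn():
  # Batch of serialized tf.Example protos arriving from TF Serving.
  serialized = tf.placeholder(dtype=tf.string, shape=[None], name='examples')

  # Reuse the same preprocessing pipeline as training/eval by feeding the
  # placeholder straight into provide_batch (its docstring allows a Tensor
  # placeholder for serialized examples).
  dataset = data.provide_batch(
      examples=serialized,
      preprocess_examples=True,
      params=hparams,
      is_training=False,
      shuffle_examples=False,
      skip_n_initial_records=0)

  # Pull one batch of preprocessed tensors out of the dataset so they can be
  # passed to the model as features.
  features = tf.data.make_one_shot_iterator(dataset).get_next()
  # ServingInputReceiver expects a Tensor or a dict of Tensors; if
  # provide_batch yields a NamedTuple, it may need converting (assumption).
  if hasattr(features, '_asdict'):
    features = features._asdict()

  return tf.estimator.export.ServingInputReceiver(
      features=features,
      receiver_tensors={'examples': serialized})

# Hypothetical export call; create_estimator and its arguments are my guess.
estimator = train_util.create_estimator(
    config.model_fn, '/path/to/checkpoint_dir', hparams)
estimator.export_saved_model('/tmp/onsets_frames_serving',
                             serving_input_receiver_fn)

The part I am unsure about is whether building a Dataset and calling get_next() inside serving_input_receiver_fn is legitimate, or whether the preprocessing has to be rewritten as plain tensor ops.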

0 Answers:

No answers yet.