tf.parse_example for examples with sequences of sequences

Asked: 2017-08-20 15:01:18

Tags: tensorflow tensorflow-serving google-cloud-ml-engine

My Tensorflow model takes, for each example, a sequence of sequences: a sequence of character tokens for each word in a sequence of words (e.g. [[3], [4, 3], [6, 1, 20]]). I previously did this by padding a 3D numpy array of shape [batch_size, max_words_len, max_chars_len] and feeding it into a placeholder.

in_question_chars = tf.placeholder(tf.int32, 
                                   [None, None, None], 
                                   name="in_question_chars")
# example of other data
in_question_words = tf.placeholder(tf.int32, 
                                   [None, None], 
                                   name="in_question_words")

But now I want to deploy the model for online prediction with Google Cloud Machine Learning Engine, following the Tensorflow Serving example: https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/mnist_saved_model.py

I came up with something like the following, but I don't really know which feature type to use for parsing the sequences of character-token sequences:

serialized_tf_example = tf.placeholder(tf.string, name='tf_example')
feature_configs = {'in_question_chars':tf.FixedLenSequenceFeature(shape=[None], 
                                       allow_missing=True, 
                                       dtype=tf.int32, 
                                       default_value=0), 
                   'in_question_words':tf.FixedLenSequenceFeature(shape=[], 
                                       allow_missing=True, 
                                       dtype=tf.int32, 
                                       default_value=0)
                   }

tf_example = tf.parse_example(serialized_tf_example, feature_configs)

in_question_chars = tf.identity(tf_example['in_question_chars'], 
                                name='in_question_chars')
# example of other data
in_question_words = tf.identity(tf_example['in_question_words'], 
                                name='in_question_words')

Should I use a VarLenFeature, which turns it into a SparseTensor (even though the data isn't really sparse), and then convert it back to dense with tf.sparse_tensor_to_dense?
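
For reference, a minimal sketch of that VarLenFeature variant (note that tf.parse_example only supports tf.int64, tf.float32 and tf.string, and a VarLenFeature only yields a rank-2 [batch, max_len] tensor per feature, not the 3-D layout):

    import tensorflow as tf

    serialized_tf_example = tf.placeholder(tf.string, name='tf_example')
    feature_configs = {
        # VarLenFeature produces a SparseTensor of all char ids in the example,
        # flattened to rank 2: [batch, max_number_of_ids_in_batch].
        'in_question_chars': tf.VarLenFeature(dtype=tf.int64),
        'in_question_words': tf.VarLenFeature(dtype=tf.int64),
    }
    tf_example = tf.parse_example(serialized_tf_example, feature_configs)

    # Densify, padding missing positions with 0.
    in_question_chars = tf.sparse_tensor_to_dense(
        tf_example['in_question_chars'], default_value=0)
    in_question_words = tf.sparse_tensor_to_dense(
        tf_example['in_question_words'], default_value=0)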

For the next step, I look up an embedding for each character token.

in_question_char_repres = tf.nn.embedding_lookup(char_embedding, 
                                                 in_question_chars) 

So another option would be to keep the SparseTensor and then use tf.nn.embedding_lookup_sparse.
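
If the SparseTensor is kept instead, a rough sketch (continuing from the VarLenFeature parse above; vocabulary size and embedding dimension are made up) could look like the following. Note that tf.nn.embedding_lookup_sparse applies a combiner across all ids in a row, so it yields one vector per example rather than one per character:

    char_vocab_size, char_embed_dim = 100, 32  # hypothetical sizes
    char_embedding = tf.get_variable(
        "char_embedding", shape=[char_vocab_size, char_embed_dim])

    sparse_char_ids = tf_example['in_question_chars']  # SparseTensor of int64 ids

    # Each row's character embeddings are averaged into a single vector.
    combined_repres = tf.nn.embedding_lookup_sparse(
        char_embedding, sparse_char_ids, sp_weights=None, combiner="mean")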

I couldn't find an example of how to do this. Please let me know what the best practice is. Thanks!

Edit 8/25/17

It doesn't seem to allow me to set None for the second dimension.

Here is a stripped-down version of my code:

def read_dataset(filename, mode=tf.contrib.learn.ModeKeys.TRAIN):  
    def _input_fn():
        num_epochs = MAX_EPOCHS if mode == tf.contrib.learn.ModeKeys.TRAIN else 1

        input_file_names = tf.train.match_filenames_once(str(filename))

        filename_queue = tf.train.string_input_producer(
            input_file_names, num_epochs=num_epochs, shuffle=True)
        reader = tf.TFRecordReader()
        _, serialized = reader.read_up_to(filename_queue, num_records=batch_size)

        features_spec = {
            CORRECT_CHILD_NODE_IDX: tf.FixedLenFeature(shape=[],
                                               dtype=tf.int64, 
                                               default_value=0),
            QUESTION_LENGTHS: tf.FixedLenFeature(shape=[], dtype=tf.int64),
            IN_QUESTION_WORDS: tf.FixedLenSequenceFeature(shape=[], 
                                                      allow_missing=True, 
                                                      dtype=tf.int64
                                                      ),
            QUESTION_CHAR_LENGTHS: tf.FixedLenSequenceFeature(shape=[], 
                                                          allow_missing=True, 
                                                          dtype=tf.int64
                                                          ),
            IN_QUESTION_CHARS: tf.FixedLenSequenceFeature(shape=[None], 
                                                      allow_missing=True, 
                                                      dtype=tf.int64
                                                      )
            }
        examples = tf.parse_example(serialized, features=features_spec)

        label = examples[CORRECT_CHILD_NODE_IDX]
        return examples, label   # dict of features, label
    return _input_fn

When I have None in the shape, it gives me this error:

    INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_task_type': None, '_task_id': 0, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f57fc309c18>, '_master': '', '_num_ps_replicas': 0, '_num_worker_replicas': 0, '_environment': 'local', '_is_chief': True, '_evaluation_master': '', '_tf_config': gpu_options {
  per_process_gpu_memory_fraction: 1.0
}
, '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_secs': 600, '_log_step_count_steps': 100, '_session_config': None, '_save_checkpoints_steps': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_model_dir': 'outputdir'}
WARNING:tensorflow:From /home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/monitors.py:269: BaseMonitor.__init__ (from tensorflow.contrib.learn.python.learn.monitors) is deprecated and will be removed after 2016-12-05.
Instructions for updating:
Monitors are deprecated. Please use tf.train.SessionRunHook.
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py in _call_cpp_shape_fn_impl(op, input_tensors_needed, input_tensors_as_shapes_needed, require_shape_fn)
    653           graph_def_version, node_def_str, input_shapes, input_tensors,
--> 654           input_tensors_as_shapes, status)
    655   except errors.InvalidArgumentError as err:

/home/jupyter-admin/anaconda3/lib/python3.6/contextlib.py in __exit__(self, type, value, traceback)
     88             try:
---> 89                 next(self.gen)
     90             except StopIteration:

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py in raise_exception_on_not_ok_status()
    465           compat.as_text(pywrap_tensorflow.TF_Message(status)),
--> 466           pywrap_tensorflow.TF_GetCode(status))
    467   finally:

InvalidArgumentError: dense_shapes[2] has unknown rank or unknown inner dimensions: [?,?] for 'ParseExample/ParseExample' (op: 'ParseExample') with input shapes: [?], [0], [], [], [], [], [], [], [], [], [], [0], [1], [], [], [0], [], [0], [0], [0].

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
<ipython-input-45-392858a0e7b4> in <module>()
     48 
     49 shutil.rmtree('outputdir', ignore_errors=True) # start fresh each time
---> 50 learn_runner.run(experiment_fn, 'outputdir')

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/learn_runner.py in run(experiment_fn, output_dir, schedule, run_config, hparams)
    207   schedule = schedule or _get_default_schedule(run_config)
    208 
--> 209   return _execute_schedule(experiment, schedule)
    210 
    211 

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/learn_runner.py in _execute_schedule(experiment, schedule)
     44     logging.error('Allowed values for this experiment are: %s', valid_tasks)
     45     raise TypeError('Schedule references non-callable member %s' % schedule)
---> 46   return task()
     47 
     48 

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/experiment.py in train_and_evaluate(self)
    500             name=eval_dir_suffix, hooks=self._eval_hooks
    501         )]
--> 502       self.train(delay_secs=0)
    503 
    504     eval_result = self._call_evaluate(input_fn=self._eval_input_fn,

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/experiment.py in train(self, delay_secs)
    278     return self._call_train(input_fn=self._train_input_fn,
    279                             max_steps=self._train_steps,
--> 280                             hooks=self._train_monitors + extra_hooks)
    281 
    282   def evaluate(self, delay_secs=None, name=None):

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/experiment.py in _call_train(self, _sentinel, input_fn, steps, hooks, max_steps)
    675                                  steps=steps,
    676                                  max_steps=max_steps,
--> 677                                  monitors=hooks)
    678 
    679   def _call_evaluate(self, _sentinel=None,  # pylint: disable=invalid-name,

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py in new_func(*args, **kwargs)
    294               'in a future version' if date is None else ('after %s' % date),
    295               instructions)
--> 296       return func(*args, **kwargs)
    297     return tf_decorator.make_decorator(func, new_func, 'deprecated',
    298                                        _add_deprecated_arg_notice_to_docstring(

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py in fit(self, x, y, input_fn, steps, batch_size, monitors, max_steps)
    456       hooks.append(basic_session_run_hooks.StopAtStepHook(steps, max_steps))
    457 
--> 458     loss = self._train_model(input_fn=input_fn, hooks=hooks)
    459     logging.info('Loss for final step: %s.', loss)
    460     return self

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py in _train_model(self, input_fn, hooks)
    954       random_seed.set_random_seed(self._config.tf_random_seed)
    955       global_step = contrib_framework.create_global_step(g)
--> 956       features, labels = input_fn()
    957       self._check_inputs(features, labels)
    958       model_fn_ops = self._get_train_ops(features, labels)

<ipython-input-44-fdb63ed72b90> in _input_fn()
     35                                                           )
     36             }
---> 37         examples = tf.parse_example(serialized, features=features_spec)
     38 
     39         label = examples[CORRECT_CHILD_NODE_IDX]

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/parsing_ops.py in parse_example(serialized, features, name, example_names)
    573   outputs = _parse_example_raw(
    574       serialized, example_names, sparse_keys, sparse_types, dense_keys,
--> 575       dense_types, dense_defaults, dense_shapes, name)
    576   return _construct_sparse_tensors_for_sparse_features(features, outputs)
    577 

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/parsing_ops.py in _parse_example_raw(serialized, names, sparse_keys, sparse_types, dense_keys, dense_types, dense_defaults, dense_shapes, name)
    698         dense_keys=dense_keys,
    699         dense_shapes=dense_shapes,
--> 700         name=name)
    701     # pylint: enable=protected-access
    702 

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_parsing_ops.py in _parse_example(serialized, names, sparse_keys, dense_keys, dense_defaults, sparse_types, dense_shapes, name)
    174                                 dense_defaults=dense_defaults,
    175                                 sparse_types=sparse_types,
--> 176                                 dense_shapes=dense_shapes, name=name)
    177   return _ParseExampleOutput._make(result)
    178 

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py in apply_op(self, op_type_name, name, **keywords)
    765         op = g.create_op(op_type_name, inputs, output_types, name=scope,
    766                          input_types=input_types, attrs=attr_protos,
--> 767                          op_def=op_def)
    768         if output_structure:
    769           outputs = op.outputs

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in create_op(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_shapes, compute_device)
   2630                     original_op=self._default_original_op, op_def=op_def)
   2631     if compute_shapes:
-> 2632       set_shapes_for_outputs(ret)
   2633     self._add_op(ret)
   2634     self._record_op_seen_by_control_dependencies(ret)

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in set_shapes_for_outputs(op)
   1909       shape_func = _call_cpp_shape_fn_and_require_op
   1910 
-> 1911   shapes = shape_func(op)
   1912   if shapes is None:
   1913     raise RuntimeError(

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py in call_with_requiring(op)
   1859 
   1860   def call_with_requiring(op):
-> 1861     return call_cpp_shape_fn(op, require_shape_fn=True)
   1862 
   1863   _call_cpp_shape_fn_and_require_op = call_with_requiring

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py in call_cpp_shape_fn(op, require_shape_fn)
    593     res = _call_cpp_shape_fn_impl(op, input_tensors_needed,
    594                                   input_tensors_as_shapes_needed,
--> 595                                   require_shape_fn)
    596     if not isinstance(res, dict):
    597       # Handles the case where _call_cpp_shape_fn_impl calls unknown_shape(op).

/home/jupyter-admin/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/common_shapes.py in _call_cpp_shape_fn_impl(op, input_tensors_needed, input_tensors_as_shapes_needed, require_shape_fn)
    657       missing_shape_fn = True
    658     else:
--> 659       raise ValueError(err.message)
    660 
    661   if missing_shape_fn:

ValueError: dense_shapes[2] has unknown rank or unknown inner dimensions: [?,?] for 'ParseExample/ParseExample' (op: 'ParseExample') with input shapes: [?], [0], [], [], [], [], [], [], [], [], [], [0], [1], [], [], [0], [], [0], [0], [0].

For now, I've worked around this by setting the second dimension to max_char_length and then flattening the 2-D sequence of sequences into a 1-D array. So I keep only the first max_char_length chars if a word is longer, and pad it with zeros if it is shorter. This seems to work, but maybe there is a way to accept variable-length sequences in the second dimension and have the padding done in tf.parse_example or tf.train.batch.
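
A minimal sketch of that workaround on the preprocessing side (the constant value and helper name are hypothetical):

    MAX_CHAR_LENGTH = 16  # assumed value for max_char_length

    def flatten_word_chars(word_char_ids):
        # word_char_ids: list of words, each a list of char ids,
        # e.g. [[3], [4, 3], [6, 1, 20]]
        flat = []
        for chars in word_char_ids:
            chars = list(chars[:MAX_CHAR_LENGTH])           # truncate
            chars += [0] * (MAX_CHAR_LENGTH - len(chars))   # pad with zeros
            flat.extend(chars)
        return flat  # length == num_words * MAX_CHAR_LENGTH

    # After parsing, the 1-D feature can be reshaped back per batch, e.g.
    # in_question_chars = tf.reshape(flat_chars, [batch_size, -1, MAX_CHAR_LENGTH])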

1 Answer:

Answer 0 (score: 3):

Edit: fixing my confusing/wrong answer =)

So what you want is to use tf.parse_single_sequence_example with tf.SequenceExample instead of tf.parse_example. This lets each feature in the example's feature_list be part of a sequence, and in this case each Feature can be a VarLenFeature representing the number of characters in a word. Unfortunately, this still doesn't work when you want to pass in multiple sentences, so we have to do a bit of hacking with higher-order functions and tf.sparse_concat.

I made a test program for this here: tf.parse_single_sequence_example
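
As a rough illustration of the per-sentence step (a hypothetical make_sequence_example builder with a single "chars" feature_list; this is not the actual test program):

    import tensorflow as tf

    def make_sequence_example(word_char_ids):
        # One SequenceExample per sentence: each word becomes one Feature in
        # the "chars" feature_list and holds a variable number of char ids.
        ex = tf.train.SequenceExample()
        chars = ex.feature_lists.feature_list["chars"]
        for word in word_char_ids:
            chars.feature.add().int64_list.value.extend(word)
        return ex.SerializeToString()

    serialized = make_sequence_example([[5, 10], [5, 10, 20]])

    # Parse a single sentence: the result is a 2-D SparseTensor [word, letter].
    _, sequence_parsed = tf.parse_single_sequence_example(
        serialized,
        sequence_features={"chars": tf.VarLenFeature(dtype=tf.int64)})
    chars_sparse = sequence_parsed["chars"]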

The input (before being serialized into a batch of SequenceExamples) looks like:

[[[5, 10], [5, 10, 20]],
 [[0, 1, 2], [2, 1, 0], [0, 1, 2, 3]]]

The resulting SparseTensor looks like:

SparseTensorValue(indices=array([[[0, 0, 0],
    [0, 0, 1],
    [0, 1, 0],
    [0, 1, 1],
    [0, 1, 2],
    [1, 0, 0],
    [1, 0, 1],
    [1, 0, 2],
    [1, 1, 0],
    [1, 1, 1],
    [1, 1, 2],
    [1, 2, 0],
    [1, 2, 1],
    [1, 2, 2],
    [1, 2, 3]]]), values=array([[ 5, 10,  5, 10, 20,  0,  1,  2,  2,  1,  0,  0,  1,  2,  3]]), dense_shape=array([[2, 3, 4]]))

which appears to be a SparseTensor with index = [sentence, word, letter].
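
Since only the output is shown above, here is a rough sketch of one way to obtain such a batched [sentence, word, letter] SparseTensor, reusing the hypothetical make_sequence_example from the sketch above and a plain Python loop in place of the higher-order functions the answer mentions:

    sentences = [[[5, 10], [5, 10, 20]],
                 [[0, 1, 2], [2, 1, 0], [0, 1, 2, 3]]]

    per_sentence = []
    for sentence in sentences:
        # Uses the hypothetical make_sequence_example defined earlier.
        _, seq = tf.parse_single_sequence_example(
            make_sequence_example(sentence),
            sequence_features={"chars": tf.VarLenFeature(dtype=tf.int64)})
        sp = seq["chars"]
        # Prepend a sentence dimension of size 1 so the tensors can be stacked.
        new_shape = tf.concat(
            [tf.constant([1], dtype=tf.int64), sp.dense_shape], axis=0)
        per_sentence.append(tf.sparse_reshape(sp, new_shape))

    # expand_nonconcat_dim pads the word/letter dimensions to a common size.
    batched = tf.sparse_concat(axis=0, sp_inputs=per_sentence,
                               expand_nonconcat_dim=True)

    with tf.Session() as sess:
        print(sess.run(batched))  # dense_shape should be [2, 3, 4]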