TFHub embedding feature column in a TensorFlow Estimator

Date: 2018-08-23 09:51:00

Tags: python tensorflow keras tensorflow-estimator tensorflow-hub

I can't figure out how to use a TensorFlow Hub embedding column (hub.text_embedding_column) with a Keras model that has been converted to a tf.Estimator.

The embedding works fine inside the Keras model as long as I don't convert the model to an Estimator.

For example, with some dummy data defined as:

x_train = ['the quick brown fox', 'jumps over a lazy']
x_eval = ['the quick brown fox', 'jumps over a lazy']
y_train = [0, 1]
y_eval = [0, 1]

I can then train a Keras model without any errors using the following code:

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

embed = hub.Module('https://tfhub.dev/google/nnlm-en-dim128/1')
def _embed(x):
    # squeeze the (batch, 1) string input to (batch,) before calling the hub module
    return embed(tf.squeeze(tf.cast(x, tf.string)))

# workaround for keras: inputs need to be 2-D numpy arrays
x_train = np.array(x_train, dtype=object)[:, np.newaxis]
x_eval = np.array(x_eval, dtype=object)[:, np.newaxis]

input_text = tf.keras.layers.Input(shape=(1,), dtype=tf.string)
embedding = tf.keras.layers.Lambda(_embed, output_shape=(128,))(input_text)
pred = tf.keras.layers.Dense(1, activation='sigmoid')(embedding)
model = tf.keras.Model(inputs=input_text, outputs=pred)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    model.fit(x_train, y_train, epochs=1, validation_data=(x_eval, y_eval))

However, if I try to convert this to an Estimator with tf.keras.estimator.model_to_estimator, I suddenly can't train the model any more.

embedding = hub.text_embedding_column('text', 'https://tfhub.dev/google/nnlm-en-dim128/1')
features = {'text': x_train}
labels = np.array(y_train)[:, np.newaxis]

input_fn = tf.estimator.inputs.numpy_input_fn(features, labels, shuffle=False)

embedding_input = tf.keras.layers.Input(shape=(128,), dtype=tf.float32, name='text')
logits = tf.keras.layers.Dense(1, activation='softmax', name='logits')(embedding_input)
model = tf.keras.Model(inputs=embedding_input, outputs=logits)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

estimator = tf.keras.estimator.model_to_estimator(model)

estimator.train(input_fn, max_steps=1)

If I use a canned estimator such as tf.estimator.DNNClassifier, I can also train the model without errors.

embedding = hub.text_embedding_column('text', 'https://tfhub.dev/google/nnlm-en-dim128/1')
features = {'text': x_train}
labels = np.array(y_train)[:, np.newaxis]

input_fn = tf.estimator.inputs.numpy_input_fn(features, labels, shuffle=False)
estimator = tf.estimator.DNNClassifier([32], [embedding])

estimator.train(input_fn, max_steps=1)

The error I get when I try to convert the Keras model to an estimator and train it is:

Input 0 of layer logits is incompatible with the layer: : expected min_ndim=2, found ndim=1. Full shape received: [None]

The full stack trace is:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-15-f1d8a31726e2> in <module>()
     22 estimator = tf.keras.estimator.model_to_estimator(model)
     23 
---> 24 estimator.train(input_fn, max_steps=1)

.../anaconda2/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.pyc in train(self, input_fn, hooks, steps, max_steps, saving_listeners)
    374 
    375       saving_listeners = _check_listeners_type(saving_listeners)
--> 376       loss = self._train_model(input_fn, hooks, saving_listeners)
    377       logging.info('Loss for final step: %s.', loss)
    378       return self

.../anaconda2/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.pyc in _train_model(self, input_fn, hooks, saving_listeners)
   1143       return self._train_model_distributed(input_fn, hooks, saving_listeners)
   1144     else:
-> 1145       return self._train_model_default(input_fn, hooks, saving_listeners)
   1146 
   1147   def _train_model_default(self, input_fn, hooks, saving_listeners):

.../anaconda2/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.pyc in _train_model_default(self, input_fn, hooks, saving_listeners)
   1168       worker_hooks.extend(input_hooks)
   1169       estimator_spec = self._call_model_fn(
-> 1170           features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
   1171       return self._train_with_estimator_spec(estimator_spec, worker_hooks,
   1172                                              hooks, global_step_tensor,

.../anaconda2/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.pyc in _call_model_fn(self, features, labels, mode, config)
   1131 
   1132     logging.info('Calling model_fn.')
-> 1133     model_fn_results = self._model_fn(features=features, **kwargs)
   1134     logging.info('Done calling model_fn.')
   1135 

.../anaconda2/lib/python2.7/site-packages/tensorflow/python/estimator/keras.pyc in model_fn(features, labels, mode)
    357     """model_fn for keras Estimator."""
    358     model = _clone_and_build_model(mode, keras_model, custom_objects, features,
--> 359                                    labels)
    360     model_output_names = []
    361     # We need to make sure that the output names of the last layer in the model

.../anaconda2/lib/python2.7/site-packages/tensorflow/python/estimator/keras.pyc in _clone_and_build_model(mode, keras_model, custom_objects, features, labels)
    313         model = models.clone_model(keras_model, input_tensors=input_tensors)
    314     else:
--> 315       model = models.clone_model(keras_model, input_tensors=input_tensors)
    316   else:
    317     model = keras_model

.../anaconda2/lib/python2.7/site-packages/tensorflow/python/keras/models.pyc in clone_model(model, input_tensors)
    261     return _clone_sequential_model(model, input_tensors=input_tensors)
    262   else:
--> 263     return _clone_functional_model(model, input_tensors=input_tensors)

.../anaconda2/lib/python2.7/site-packages/tensorflow/python/keras/models.pyc in _clone_functional_model(model, input_tensors)
    154               kwargs['mask'] = computed_mask
    155           output_tensors = generic_utils.to_list(layer(computed_tensor,
--> 156                                                        **kwargs))
    157           output_masks = generic_utils.to_list(
    158               layer.compute_mask(computed_tensor, computed_mask))

.../anaconda2/lib/python2.7/site-packages/tensorflow/python/keras/engine/base_layer.pyc in __call__(self, inputs, *args, **kwargs)
    718 
    719         # Check input assumptions set before layer building, e.g. input rank.
--> 720         self._assert_input_compatibility(inputs)
    721         if input_list and self._dtype is None:
    722           try:

.../anaconda2/lib/python2.7/site-packages/tensorflow/python/keras/engine/base_layer.pyc in _assert_input_compatibility(self, inputs)
   1438                            ', found ndim=' + str(ndim) +
   1439                            '. Full shape received: ' +
-> 1440                            str(x.shape.as_list()))
   1441       # Check dtype.
   1442       if spec.dtype is not None:

ValueError: Input 0 of layer logits is incompatible with the layer: : expected min_ndim=2, found ndim=1. Full shape received: [None]

1 Answer:

Answer 0 (score: 0)

I finally managed to figure out how to get a Keras model working with TFHub embeddings in an Estimator. You need to do the embedding outside of the Keras model: the Keras model has to take the embedding as its input, rather than computing the embedding itself. You can, however, use the Keras model as a function inside the estimator's model function.

For example, you can define a Keras model that takes pre-computed embeddings as input (for this example I wanted the embedding to return a sequence rather than a single averaged embedding, so the input shape includes a sequence length):

import tensorflow as tf
import tensorflow_hub as hub
import numpy as np
import shutil

def create_model(max_seq_len, embedding_size):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dropout(0.5, input_shape=(max_seq_len, embedding_size)))
    model.add(tf.keras.layers.SeparableConv1D(8, 3, padding='same', activation=tf.nn.leaky_relu))
    model.add(tf.keras.layers.GlobalAveragePooling1D())
    model.add(tf.keras.layers.Dense(2, activation='softmax'))
    return model

Instead of compiling this model and then using model_to_estimator, you would define an estimator model function, for example:

def model_fn(features, labels, mode, params):

    model = create_model(5, 128)

    # compute the embeddings outside of the Keras model
    embed = hub.Module(...)
    text_seq = pad_seq(features['text'], 5)  # pad_seq: helper (not shown) that turns each example into a fixed-length token sequence
    embeddings = tf.map_fn(embed, text_seq)

    if mode == tf.estimator.ModeKeys.TRAIN:
        logits = model(embeddings, training=True)

    # some more logic

Calling the Keras model like this is how you compute the logits from the model. You can then return a tf.estimator.EstimatorSpec, create an Estimator from the model_fn, and train from there.
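
To give a rough idea, this is one way the body of that model_fn could be fleshed out from the model(embeddings, ...) call onwards. The loss and optimizer are my own illustrative choices, not part of the original answer, and since create_model ends in a softmax layer its output is treated here as probabilities rather than raw logits:

    probs = model(embeddings, training=(mode == tf.estimator.ModeKeys.TRAIN))

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions={'probabilities': probs})

    # cross-entropy against integer class labels; probs is already softmax-normalized
    loss = tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(
            tf.squeeze(tf.cast(labels, tf.int32)), probs))

    if mode == tf.estimator.ModeKeys.TRAIN:
        train_op = tf.train.AdamOptimizer().minimize(
            loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    return tf.estimator.EstimatorSpec(mode, loss=loss)  # EVAL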

You can refer to the TensorFlow MNIST example to see how they wrap TensorFlow computation around a Keras model to build an estimator model function and then an Estimator, even though they don't use anything from TFHub.
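
As a minimal usage sketch, wiring the pieces together could look something like this, assuming the model_fn above (including its pad_seq helper, which isn't defined in this answer) and the dummy data from the question:

estimator = tf.estimator.Estimator(model_fn=model_fn, params={})

input_fn = tf.estimator.inputs.numpy_input_fn(
    {'text': np.array(x_train, dtype=object)}, np.array(y_train), shuffle=True)
estimator.train(input_fn, max_steps=100)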