ValueError: The last dimension of the inputs to `Dense` should be defined. Found `None`

Asked: 2019-12-18 00:54:52

Tags: python python-3.x tensorflow keras tensorflow2.0

I have been working on a TensorFlow 2 model and I keep running into this error. I have tried defining the shape of every layer, but nothing changes. Moreover, the error only appears when I specify sparse=True in the Input layer, which I have to do because the input tensors are sparse and other parts of the script depend on that. TensorFlow version: 2.0.0-beta1. If I use anything newer than this, I get other obscure errors caused by the sparse input. It is surprising how many problems TF 2.0 seems to have with this kind of input.

Current function definition:

def make_feed_forward_model():
    inputs = tf.keras.Input(shape=(HPARAMS.max_seq_length,),dtype='float32', name='sample', sparse=True)
    dense_layer_1 = tf.keras.layers.Dense(HPARAMS.num_fc_units, activation='relu')(inputs)
    dense_layer_2 = tf.keras.layers.Dense(HPARAMS.num_fc_units_2, activation='relu')(dense_layer_1)
    dense_layer_3 = tf.keras.layers.Dense(HPARAMS.num_fc_units_3, activation='relu')(dense_layer_2)
    outputs = tf.keras.layers.Dense(4, activation='softmax')(dense_layer_3)

    return tf.keras.Model(inputs=inputs, outputs=outputs)

Then when I run the following, I get the error:

model = make_feed_forward_model()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

Traceback:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-56-720f117bb231> in <module>
      1 # Feel free to use an architecture of your choice.
----> 2 model = make_feed_forward_model()
      3 model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

<ipython-input-55-5f35f6f22300> in make_feed_forward_model()
     18     #embedding_layer = tf.keras.layers.Embedding(HPARAMS.vocab_size, 16)(inputs)
     19     #pooling_layer = tf.keras.layers.GlobalAveragePooling1D()(inputs)
---> 20     dense_layer_1 = tf.keras.layers.Dense(HPARAMS.num_fc_units, activation='relu')(inputs)
     21     dense_layer_2 = tf.keras.layers.Dense(HPARAMS.num_fc_units_2, activation='relu')(dense_layer_1)
     22     dense_layer_3 = tf.keras.layers.Dense(HPARAMS.num_fc_units_3, activation='relu')(dense_layer_2)

~\Anaconda3\envs\tf-nsl\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in __call__(self, inputs, *args, **kwargs)
    614           # Build layer if applicable (if the `build` method has been
    615           # overridden).
--> 616           self._maybe_build(inputs)
    617 
    618           # Wrapping `call` function in autograph to allow for dynamic control

~\Anaconda3\envs\tf-nsl\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in _maybe_build(self, inputs)
   1964         # operations.
   1965         with tf_utils.maybe_init_scope(self):
-> 1966           self.build(input_shapes)
   1967       # We must set self.built since user defined build functions are not
   1968       # constrained to set self.built.

~\Anaconda3\envs\tf-nsl\lib\site-packages\tensorflow\python\keras\layers\core.py in build(self, input_shape)
   1003     input_shape = tensor_shape.TensorShape(input_shape)
   1004     if tensor_shape.dimension_value(input_shape[-1]) is None:
-> 1005       raise ValueError('The last dimension of the inputs to `Dense` '
   1006                        'should be defined. Found `None`.')
   1007     last_dim = tensor_shape.dimension_value(input_shape[-1])

ValueError: The last dimension of the inputs to `Dense` should be defined. Found `None`.

EDIT: SparseTensor error

It seems that if I use any version newer than TF 2.0.0-beta1, training fails completely:

ValueError: The two structures don't have the same nested structure.

    First structure: type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.float32, name=None)

    Second structure: type=SparseTensor str=SparseTensor(indices=Tensor("sample/indices_1:0", shape=(None, 2), dtype=int64), values=Tensor("sample/values_1:0", shape=(None,), dtype=float32), dense_shape=Tensor("sample/shape_1:0", shape=(2,), dtype=int64))

    More specifically: Substructure "type=SparseTensor str=SparseTensor(indices=Tensor("sample/indices_1:0", shape=(None, 2), dtype=int64), values=Tensor("sample/values_1:0", shape=(None,), dtype=float32), dense_shape=Tensor("sample/shape_1:0", shape=(2,), dtype=int64))" is a sequence, while substructure "type=TensorSpec str=TensorSpec(shape=(None, None), dtype=tf.float32, name=None)" is not
    Entire first structure:
    .
    Entire second structure:
    .

EDIT 2: Error after adding batch_size to the Input layer

def make_feed_forward_model():  
    inputs = tf.keras.Input(shape=(HPARAMS.max_seq_length,),dtype='float32', name='sample', sparse=True, batch_size=HPARAMS.batch_size)
    dense_layer_1 = tf.keras.layers.Dense(HPARAMS.num_fc_units, activation='relu')(inputs)
    dense_layer_2 = tf.keras.layers.Dense(HPARAMS.num_fc_units_2, activation='relu')(dense_layer_1)
    dense_layer_3 = tf.keras.layers.Dense(HPARAMS.num_fc_units_3, activation='relu')(dense_layer_2)
    outputs = tf.keras.layers.Dense(4, activation='softmax')(dense_layer_3)

    return tf.keras.Model(inputs=inputs, outputs=outputs)
model = make_feed_forward_model()
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

When I run model.compile(), I get:

TypeError: Failed to convert object of type <class 'tensorflow.python.framework.sparse_tensor.SparseTensor'> to Tensor. 

Contents: SparseTensor(indices=Tensor("sample/indices_3:0", shape=(None, 2), dtype=int64), values=Tensor("sample/values_3:0", shape=(None,), dtype=float32), dense_shape=Tensor("sample/shape_3:0", shape=(2,), dtype=int64)). Consider casting elements to a supported type.

1 Answer:

Answer 0 (score: 1):

This happens because when the input tensor is sparse, its shape becomes (None, None) rather than (None, HPARAMS.max_seq_length):

inputs = tf.keras.Input(shape=(100,),dtype='float32', name='sample', sparse=True)
print(inputs.shape)
# output: (?, ?)
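
For comparison (a quick check of the same call, assuming the same TF 2.0 environment), dropping sparse=True keeps the declared feature dimension:

inputs = tf.keras.Input(shape=(100,), dtype='float32', name='sample')
print(inputs.shape)
# output: (None, 100)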

This also appears to be an open issue.
One solution is to write a custom layer that subclasses the Layer class (see this).
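
As an illustration only, here is a minimal sketch of what such a custom layer could look like (the class name and its arguments are hypothetical; the feature dimension has to be passed in explicitly because the sparse input's static shape is (None, None)):

import tensorflow as tf

class SparseDense(tf.keras.layers.Layer):
    # Hypothetical Dense-like layer that accepts a SparseTensor input.
    def __init__(self, units, input_dim, activation=None, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.input_dim = input_dim  # known feature size, e.g. HPARAMS.max_seq_length
        self.activation = tf.keras.activations.get(activation)

    def build(self, input_shape):
        # Ignore the undefined last dimension of input_shape and use input_dim instead.
        self.kernel = self.add_weight(name='kernel',
                                      shape=(self.input_dim, self.units),
                                      initializer='glorot_uniform',
                                      trainable=True)
        self.bias = self.add_weight(name='bias',
                                    shape=(self.units,),
                                    initializer='zeros',
                                    trainable=True)

    def call(self, inputs):
        # sparse_dense_matmul multiplies the SparseTensor by the dense kernel directly.
        outputs = tf.sparse.sparse_dense_matmul(inputs, self.kernel) + self.bias
        return self.activation(outputs)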

As a workaround (tested on tf-gpu 2.0.0), adding the batch size to the Input layer works:

inputs = tf.keras.Input(shape=(100,), dtype='float32', name='sample', sparse=True, batch_size=32)
print(inputs.shape)
# output: (32, 100)
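
Put together, a minimal self-contained version of the workaround (placeholder sizes instead of HPARAMS; as noted above, this is only verified on tf-gpu 2.0.0) would look like:

import tensorflow as tf

inputs = tf.keras.Input(shape=(100,), dtype='float32', name='sample',
                        sparse=True, batch_size=32)
hidden = tf.keras.layers.Dense(64, activation='relu')(inputs)
outputs = tf.keras.layers.Dense(4, activation='softmax')(hidden)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()

Note that with a fixed batch_size on the Input layer, every batch fed to the model must have exactly that size (for example, batch the tf.data pipeline with drop_remainder=True).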