ValueError: Cannot convert a Tensor of dtype resource to a NumPy array

Date: 2020-01-29 07:44:59

Tags: python tensorflow keras tensorflow2.0

I am trying to isolate some user-specific parameters with a parameter matrix, where each row learns the parameters for that particular user.

I want to index into the matrix using the user ID and concatenate the gathered parameters with the other features.

Finally, a few fully connected layers produce the desired output.

However, I keep getting the error below on the last line of the code.


---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-1-93de3591ccf0> in <module>
     20 # combined = tf.keras.layers.Concatenate(axis=-1)([le_param, le])
     21 
---> 22 net = tf.keras.layers.Dense(128)(combined)

~/anaconda3/envs/tam-env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer.py in __call__(self, inputs, *args, **kwargs)
    793     # framework.
    794     if build_graph and base_layer_utils.needs_keras_history(inputs):
--> 795       base_layer_utils.create_keras_history(inputs)
    796 
    797     # Clear eager losses on top level model call.

~/anaconda3/envs/tam-env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer_utils.py in create_keras_history(tensors)
    182     keras_tensors: The Tensors found that came from a Keras Layer.
    183   """
--> 184   _, created_layers = _create_keras_history_helper(tensors, set(), [])
    185   return created_layers
    186 

~/anaconda3/envs/tam-env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
    229               constants[i] = backend.function([], op_input)([])
    230       processed_ops, created_layers = _create_keras_history_helper(
--> 231           layer_inputs, processed_ops, created_layers)
    232       name = op.name
    233       node_def = op.node_def.SerializeToString()

~/anaconda3/envs/tam-env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
    229               constants[i] = backend.function([], op_input)([])
    230       processed_ops, created_layers = _create_keras_history_helper(
--> 231           layer_inputs, processed_ops, created_layers)
    232       name = op.name
    233       node_def = op.node_def.SerializeToString()

~/anaconda3/envs/tam-env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/base_layer_utils.py in _create_keras_history_helper(tensors, processed_ops, created_layers)
    227           else:
    228             with ops.init_scope():
--> 229               constants[i] = backend.function([], op_input)([])
    230       processed_ops, created_layers = _create_keras_history_helper(
    231           layer_inputs, processed_ops, created_layers)

~/anaconda3/envs/tam-env/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py in __call__(self, inputs)
   3746     return nest.pack_sequence_as(
   3747         self._outputs_structure,
-> 3748         [x._numpy() for x in outputs],  # pylint: disable=protected-access
   3749         expand_composites=True)
   3750 

~/anaconda3/envs/tam-env/lib/python3.6/site-packages/tensorflow_core/python/keras/backend.py in <listcomp>(.0)
   3746     return nest.pack_sequence_as(
   3747         self._outputs_structure,
-> 3748         [x._numpy() for x in outputs],  # pylint: disable=protected-access
   3749         expand_composites=True)
   3750 

ValueError: Cannot convert a Tensor of dtype resource to a NumPy array.

Code to reproduce the error:

import tensorflow as tf

num_uids = 50
input_uid = tf.keras.layers.Input(shape=(1,), dtype=tf.int32)
# One row of 9 learnable parameters per user.
params = tf.Variable(tf.random.normal((num_uids, 9)), trainable=True)

# Look up the parameter row for each user ID in the batch.
param = tf.gather_nd(params, input_uid)

input_shared_features = tf.keras.layers.Input(shape=(128,), dtype=tf.float32)
combined = tf.concat([param, input_shared_features], axis=-1)

# The ValueError is raised on this line.
net = tf.keras.layers.Dense(128)(combined)

I have tried a few things:

  1. I tried wrapping tf.gather_nd and tf.concat in tf.keras.layers.Lambda (roughly as in the sketch below).
  2. I tried replacing tf.concat with tf.keras.layers.Concatenate.
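The Lambda-based variant looked roughly like this (a sketch, the exact code differed slightly), and it failed with the same error:

import tensorflow as tf

num_uids = 50
input_uid = tf.keras.layers.Input(shape=(1,), dtype=tf.int32)
input_shared_features = tf.keras.layers.Input(shape=(128,), dtype=tf.float32)
params = tf.Variable(tf.random.normal((num_uids, 9)), trainable=True)

# Wrap the lookup in a Lambda layer and use the Concatenate layer instead of tf.concat.
param = tf.keras.layers.Lambda(lambda uid: tf.gather_nd(params, uid))(input_uid)
combined = tf.keras.layers.Concatenate(axis=-1)([param, input_shared_features])
net = tf.keras.layers.Dense(128)(combined)  # still fails with the same ValueError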

Strangely, if I specify the number of items (a fixed batch of 32) and replace the Input layers with tf.Variable, the code works as expected:

import tensorflow as tf

num_uids = 50
# Fixed batch of 32 eager tensors instead of symbolic Input layers.
input_uid = tf.Variable(tf.ones((32, 1), dtype=tf.int32))
params = tf.Variable(tf.random.normal((num_uids, 9)), trainable=True)

param = tf.gather_nd(params, input_uid)

input_shared_features = tf.Variable(tf.ones((32, 128), dtype=tf.float32))
combined = tf.concat([param, input_shared_features], axis=-1)

net = tf.keras.layers.Dense(128)(combined)

I am using TensorFlow 2.1 with Python 3.6.10.

2 Answers:

Answer 0 (score: 2)

I ran into a similar problem when I tried to use a TensorFlow table lookup (tf.lookup.StaticHashTable) in TensorFlow 2.x. I ended up solving it by keeping the lookup inside a custom Keras layer. The same approach appears to solve this problem as well, at least up to the versions mentioned in the question. (I tried it with TensorFlow 2.0, 2.1 and 2.2, and it works in all of them.)
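For the table-lookup case, the pattern was essentially the following (a simplified sketch rather than the exact code from that issue; the keys, values and the LookupLayer name are only illustrative):

import tensorflow as tf

# Keep the StaticHashTable inside a custom layer so that Keras never has to
# evaluate the table's resource handle while building the functional graph.
class LookupLayer(tf.keras.layers.Layer):
    def __init__(self, keys, values, default_value=-1, **kwargs):
        super(LookupLayer, self).__init__(**kwargs)
        self.table = tf.lookup.StaticHashTable(
            tf.lookup.KeyValueTensorInitializer(keys, values), default_value)

    def call(self, inputs):
        return self.table.lookup(inputs)

ids = tf.keras.layers.Input(shape=(1,), dtype=tf.string)
mapped = LookupLayer(tf.constant(["a", "b", "c"]), tf.constant([0, 1, 2]))(ids)

Applying the same idea to the gather/concat case from the question: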

import tensorflow as tf

num_uids = 50
input_uid = tf.keras.Input(shape=(1,), dtype=tf.int32)
input_shared_features = tf.keras.layers.Input(shape=(128,), dtype=tf.float32)

class CustomLayer(tf.keras.layers.Layer):
    def __init__(self, num_uids):
        super(CustomLayer, self).__init__(trainable=True, dtype=tf.int64)
        self.num_uids = num_uids

    def build(self, input_shape):
        # The per-user parameter matrix lives inside the layer, so Keras never
        # has to evaluate the variable's resource handle while tracing the graph.
        self.params = tf.Variable(tf.random.normal((self.num_uids, 9)), trainable=True)
        self.built = True

    def call(self, input_uid, input_shared_features):
        param = tf.gather_nd(self.params, input_uid)
        combined = tf.concat([param, input_shared_features], axis=-1)
        return combined

    def get_config(self):
        config = super(CustomLayer, self).get_config()
        config.update({'num_uids': self.num_uids})
        return config

combined = CustomLayer(num_uids)(input_uid, input_shared_features)
net = tf.keras.layers.Dense(128)(combined)
model = tf.keras.Model(inputs={'input_uid': input_uid, 'input_shared_features': input_shared_features},
                       outputs=net)
model.summary()

Here is the model summary:

Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 1)]          0                                            
__________________________________________________________________________________________________
input_2 (InputLayer)            [(None, 128)]        0                                            
__________________________________________________________________________________________________
custom_layer (CustomLayer)      (None, 137)          450         input_1[0][0]                    
__________________________________________________________________________________________________
dense (Dense)                   (None, 128)          17664       custom_layer[0][0]               
==================================================================================================
Total params: 18,114
Trainable params: 18,114
Non-trainable params: 0
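
As a quick smoke test (not part of the original answer), the model can be fit on random dummy data; the regression-style target of size 128 is only an assumption for illustration:

import numpy as np

batch = 32
dummy_uid = np.random.randint(0, num_uids, size=(batch, 1)).astype("int32")
dummy_features = np.random.normal(size=(batch, 128)).astype("float32")
dummy_target = np.random.normal(size=(batch, 128)).astype("float32")

model.compile(optimizer="adam", loss="mse")
model.fit({'input_uid': dummy_uid, 'input_shared_features': dummy_features},
          dummy_target, epochs=1)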

For more information, refer to the tf.keras.layers.Layer documentation.

For references on the table lookup problem and its solution, see the links below:

Answer 1 (score: 1)

Although Jithin Jees's answer is very clear, shown below is a workaround that wraps only the gather in a small custom layer and then concatenates:

import tensorflow as tf

num_uids = 50

# Original (failing) version, kept for reference:
# input_uid = tf.keras.layers.Input(shape=(1,), dtype=tf.int32)
# params = tf.Variable(tf.random.normal((num_uids, 9)), trainable=True)
# param = tf.gather_nd(params, input_uid)

indices = tf.keras.layers.Input(name='indices', shape=(), dtype='int32')
params = tf.Variable(tf.random.normal((num_uids, 9)), trainable=True)

class GatherLayer(tf.keras.layers.Layer):
    def call(self, indices, params):
        # Look up one parameter row per index inside a Keras layer.
        return tf.gather(params, indices)

output = GatherLayer()(indices, params)

input_shared_features = tf.keras.layers.Input(shape=(128,), dtype=tf.float32)
combined = tf.concat([output, input_shared_features], axis=-1)

net = tf.keras.layers.Dense(128)(combined)
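
A hedged usage sketch (not from the original answer): assuming the graph above builds as described, a Model can be constructed from the two symbolic inputs and run on random dummy data:

import numpy as np

model = tf.keras.Model(inputs=[indices, input_shared_features], outputs=net)

dummy_idx = np.random.randint(0, num_uids, size=(4,)).astype("int32")
dummy_feat = np.random.normal(size=(4, 128)).astype("float32")
print(model([dummy_idx, dummy_feat]).shape)  # expected: (4, 128)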

For more details, refer to this GitHub Issue.