Simple LSTM model: "No attr named '_XlaCompile' in name" error

Date: 2018-08-23 16:07:12

Tags: python python-3.x tensorflow keras

I am very new to machine learning and I ran into an error while trying to build a simple LSTM model, and I have absolutely no idea how to debug it. I am using Keras version 2.2.2. My code roughly looks like this:

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

model = Sequential()
model.add(Embedding(400001, emb_dim, trainable=False, input_length=56, weights=[emb_matrix]))
model.add(LSTM(128, return_sequences=False))
model.add(Dense(5, activation='softmax'))
model.summary()
# a compile call like this is required before fit(); the exact loss/optimizer are assumed here
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(train_in, train_out, epochs=50, batch_size=32, shuffle=True)

My input is originally a list of sentences that I want to do sentiment analysis on. I convert the sentences into padded sequences of word indices of shape (sample size, 56), and I use 50-dim GloVe vectors to build the embedding matrix, since the maximum number of words per sentence is 56 (is that too high?).
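For context, the preprocessing roughly looks like the sketch below (simplified; word_to_index and word_to_vec stand in for my actual GloVe lookup dictionaries, and sentences is my raw list of strings):

import numpy as np
from keras.preprocessing.sequence import pad_sequences

emb_dim = 50
vocab_size = 400001  # 400,000 GloVe words + 1 row reserved for the padding index 0

# Build the matrix that gets passed to the Embedding layer via weights=[emb_matrix]
emb_matrix = np.zeros((vocab_size, emb_dim), dtype='float32')
for word, idx in word_to_index.items():
    emb_matrix[idx] = word_to_vec[word]

# Turn each sentence into a sequence of word indices, padded/truncated to length 56
train_in = pad_sequences(
    [[word_to_index[w] for w in s.lower().split()] for s in sentences],
    maxlen=56, padding='post')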

My model summary:

Layer (type)                 Output Shape              Param #   
=================================================================
embedding_5 (Embedding)      (None, 56, 50)            20000050  
_________________________________________________________________
lstm_6 (LSTM)                (None, 128)               91648     
_________________________________________________________________
dense_4 (Dense)              (None, 5)                 645       
=================================================================
Total params: 20,092,343
Trainable params: 92,293
Non-trainable params: 20,000,050

My inputs:

print(train_in.shape, train_out.shape)
>(156060, 56) (156060, 5)
emb_matrix.shape
>(400001, 50)
print(train_in.dtype, train_out.dtype, emb_matrix.dtype)
>float32 float32 float32

And finally, my error message:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py in _MaybeCompile(scope, op, func, grad_fn)
    369     try:
--> 370       xla_compile = op.get_attr("_XlaCompile")
    371       xla_separate_compiled_gradients = op.get_attr(

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\framework\ops.py in get_attr(self, name)
   2172         raise ValueError(
-> 2173             "No attr named '" + name + "' in " + str(self._node_def))
   2174       x = self._node_def.attr[name]

ValueError: No attr named '_XlaCompile' in name: "lstm_6/while/TensorArrayWrite/TensorArrayWriteV3"
op: "TensorArrayWriteV3"
input: "lstm_6/while/TensorArrayWrite/TensorArrayWriteV3/Enter"
input: "lstm_6/while/Identity_1"
input: "lstm_6/while/mul_5"
input: "lstm_6/while/Identity_2"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
attr {
  key: "_class"
  value {
    list {
      s: "loc:@lstm_6/while/mul_5"
    }
  }
}


During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\framework\op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
    509                 as_ref=input_arg.is_ref,
--> 510                 preferred_dtype=default_dtype)
    511           except TypeError as err:

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\framework\ops.py in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, ctx)
   1021     if ret is None:
-> 1022       ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
   1023 

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\framework\ops.py in _TensorTensorConversionFunction(t, dtype, name, as_ref)
    865         "Tensor conversion requested dtype %s for Tensor with dtype %s: %r" %
--> 866         (dtype.name, t.dtype.name, str(t)))
    867   return t

ValueError: Tensor conversion requested dtype int32 for Tensor with dtype int64: 'Tensor("lstm_6/while/maximum_iterations:0", shape=(), dtype=int64)'

During handling of the above exception, another exception occurred:

TypeError                                 Traceback (most recent call last)
<ipython-input-54-936a1189c2d5> in <module>()
----> 1 model.fit(train_in, train_out, epochs = 50, batch_size = 32, shuffle=True)

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
   1006         else:
   1007             ins = x + y + sample_weights
-> 1008         self._make_train_function()
   1009         f = self.train_function
   1010 

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\keras\engine\training.py in _make_train_function(self)
    496                     training_updates = self.optimizer.get_updates(
    497                         params=self._collected_trainable_weights,
--> 498                         loss=self.total_loss)
    499                 updates = (self.updates +
    500                            training_updates +

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\keras\legacy\interfaces.py in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name +
     90                               '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\keras\optimizers.py in get_updates(self, loss, params)
    633     @interfaces.legacy_get_updates_support
    634     def get_updates(self, loss, params):
--> 635         grads = self.get_gradients(loss, params)
    636         self.updates = [K.update_add(self.iterations, 1)]
    637 

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\keras\optimizers.py in get_gradients(self, loss, params)
     87 
     88     def get_gradients(self, loss, params):
---> 89         grads = K.gradients(loss, params)
     90         if None in grads:
     91             raise ValueError('An operation has `None` for gradient. '

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\keras\backend\tensorflow_backend.py in gradients(loss, variables)
   2706         A gradients tensor.
   2707     """
-> 2708     return tf.gradients(loss, variables, colocate_gradients_with_ops=True)
   2709 
   2710 

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py in gradients(ys, xs, grad_ys, name, colocate_gradients_with_ops, gate_gradients, aggregation_method, stop_gradients)
    607                 # functions.
    608                 in_grads = _MaybeCompile(
--> 609                     grad_scope, op, func_call, lambda: grad_fn(op, *out_grads))
    610               else:
    611                 # For function call ops, we add a 'SymbolicGradient'

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py in _MaybeCompile(scope, op, func, grad_fn)
    373       xla_scope = op.get_attr("_XlaScope").decode()
    374     except ValueError:
--> 375       return grad_fn()  # Exit early
    376 
    377   if not xla_compile:

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\gradients_impl.py in <lambda>()
    607                 # functions.
    608                 in_grads = _MaybeCompile(
--> 609                     grad_scope, op, func_call, lambda: grad_fn(op, *out_grads))
    610               else:
    611                 # For function call ops, we add a 'SymbolicGradient'

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\tensor_array_grad.py in _TensorArrayWriteGrad(op, flow)
    129                                     colocate_with_first_write_call=False)
    130        .grad(source=grad_source, flow=flow))
--> 131   grad = g.read(index)
    132   return [None, None, grad, flow]
    133 

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\tensor_array_ops.py in read(self, index, name)
    857       The tensor at index `index`.
    858     """
--> 859     return self._implementation.read(index, name=name)
    860 
    861   @tf_should_use.should_use_result

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\tensor_array_ops.py in read(self, index, name)
    257         flow_in=self._flow,
    258         dtype=self._dtype,
--> 259         name=name)
    260     if self._element_shape:
    261       value.set_shape(self._element_shape[0].dims)

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\gen_data_flow_ops.py in _tensor_array_read_v3(handle, index, flow_in, dtype, name)
   4993     _, _, _op = _op_def_lib._apply_op_helper(
   4994         "TensorArrayReadV3", handle=handle, index=index, flow_in=flow_in,
-> 4995         dtype=dtype, name=name)
   4996     _result = _op.outputs[:]
   4997     _inputs_flat = _op.inputs

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\framework\op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
    785         op = g.create_op(op_type_name, inputs, output_types, name=scope,
    786                          input_types=input_types, attrs=attr_protos,
--> 787                          op_def=op_def)
    788       return output_structure, op_def.is_stateful, op
    789 

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\framework\ops.py in create_op(self, op_type, inputs, dtypes, input_types, name, attrs, op_def, compute_shapes, compute_device)
   3158         input_types=input_types,
   3159         original_op=self._default_original_op,
-> 3160         op_def=op_def)
   3161     self._create_op_helper(ret, compute_shapes=compute_shapes,
   3162                            compute_device=compute_device)

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\framework\ops.py in __init__(self, node_def, g, inputs, output_types, control_inputs, input_types, original_op, op_def)
   1672       control_flow_util.CheckInputFromValidContext(self, input_tensor.op)
   1673     if self._control_flow_context is not None:
-> 1674       self._control_flow_context.AddOp(self)
   1675     self._recompute_node_def()
   1676 

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\control_flow_ops.py in AddOp(self, op)
   2249             op_input_ctxt._AddOpInternal(op)
   2250             return
-> 2251     self._AddOpInternal(op)
   2252 
   2253   def _AddOpInternal(self, op):

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\control_flow_ops.py in _AddOpInternal(self, op)
   2272       for index in range(len(op.inputs)):
   2273         x = op.inputs[index]
-> 2274         real_x = self.AddValue(x)
   2275         if real_x != x:
   2276           op._update_input(index, real_x)

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\control_flow_ops.py in AddValue(self, val)
   2205               forward_ctxt = forward_ctxt.GetWhileContext()
   2206           if forward_ctxt == grad_ctxt.grad_state.forward_context:
-> 2207             real_val = grad_ctxt.grad_state.GetRealValue(val)
   2208             self._external_values[val.name] = real_val
   2209             return real_val

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\control_flow_ops.py in GetRealValue(self, value)
   1048           # Record the history of this value in forward_ctxt.
   1049           self._grad_context.Exit()
-> 1050           history_value = cur_grad_state.AddForwardAccumulator(cur_value)
   1051           self._grad_context.Enter()
   1052           break

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\control_flow_ops.py in AddForwardAccumulator(self, value, dead_branch)
    906             max_size=maximum_iterations,
    907             elem_type=value.dtype.base_dtype,
--> 908             name="f_acc")
    909         # pylint: enable=protected-access
    910       if curr_ctxt: curr_ctxt.Exit()

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\ops\gen_data_flow_ops.py in _stack_v2(max_size, elem_type, stack_name, name)
   4014     _, _, _op = _op_def_lib._apply_op_helper(
   4015         "StackV2", max_size=max_size, elem_type=elem_type,
-> 4016         stack_name=stack_name, name=name)
   4017     _result = _op.outputs[:]
   4018     _inputs_flat = _op.inputs

c:\users\admin\appdata\local\programs\python\python36\lib\site-packages\tensorflow\python\framework\op_def_library.py in _apply_op_helper(self, op_type_name, name, **keywords)
    531             if input_arg.type != types_pb2.DT_INVALID:
    532               raise TypeError("%s expected type of %s." %
--> 533                               (prefix, dtypes.as_dtype(input_arg.type).name))
    534             else:
    535               # Update the maps with the default, if needed.

TypeError: Input 'max_size' of 'StackV2' Op has type int64 that does not match expected type of int32.

1 answer:

Answer 0 (score: 1):

I was originally on TF version 1.5.0; I upgraded to v1.8.0 and everything works now. Problem solved.
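For reference, a quick way to confirm which versions the interpreter is actually picking up (after e.g. pip install --upgrade "tensorflow==1.8.0") is something like:

import tensorflow as tf
import keras

# Print the versions that are actually imported at runtime
print(tf.__version__)     # 1.8.0 after the upgrade (was 1.5.0)
print(keras.__version__)  # 2.2.2 in this question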