TypeError: 'NoneType' object is not callable when training a model

Asked: 2021-05-30 14:16:40

Tags: python tensorflow machine-learning keras

I am trying to train a model for my project, but I get the following error:

Epoch 1/100
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-22-099c92a922cd> in <module>
      1 epochs = 100
      2 batch_size = 64
----> 3 history = model.fit(x=[q1_X_train, q2_X_train],
      4                     y=y_train,
      5                     epochs=epochs,

~/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
   1181                 _r=1):
   1182               callbacks.on_train_batch_begin(step)
-> 1183               tmp_logs = self.train_function(iterator)
   1184               if data_handler.should_sync:
   1185                 context.async_wait()

~/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    887 
    888       with OptionalXlaContext(self._jit_compile):
--> 889         result = self._call(*args, **kwds)
    890 
    891       new_tracing_count = self.experimental_get_tracing_count()

~/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
    915       # In this case we have created variables on the first call, so we run the
    916       # defunned version which is guaranteed to never create variables.
--> 917       return self._stateless_fn(*args, **kwds)  # pylint: disable=not-callable
    918     elif self._stateful_fn is not None:
    919       # Release the lock early so that multiple threads can perform the call

TypeError: 'NoneType' object is not callable

I created the model like this:

from tensorflow.keras import optimizers
from tensorflow.keras import backend as K
from tensorflow.keras.layers import (Input, Embedding, TimeDistributed, Dense,
                                     Lambda, Dropout, concatenate)
from tensorflow.keras.models import Model

def create_model(input_shape, embeddings_dim, embeddings_matrix, vocab_size,
                 max_seq_length, trainable_embeddings, dropout, hidden_units):
    # TODO: Add docstring
    X1_input = Input(input_shape, name="input_X1")
    X2_input = Input(input_shape, name="input_X2")

    # Encode both inputs with the same architecture (separate layer instances)
    # Output shape after the Lambda sum: (batch_size, embeddings_dim)
    embeddor = Embedding(vocab_size, embeddings_dim, weights=[embeddings_matrix],
                         input_length=max_seq_length, trainable=trainable_embeddings)(X1_input)
    td = TimeDistributed(Dense(embeddings_dim, activation='relu'))(embeddor)
    ld = Lambda(lambda x: K.sum(x, axis=1), output_shape=(embeddings_dim,))(td)

    embeddor1 = Embedding(vocab_size, embeddings_dim, weights=[embeddings_matrix],
                          input_length=max_seq_length, trainable=trainable_embeddings)(X2_input)
    td1 = TimeDistributed(Dense(embeddings_dim, activation='relu'))(embeddor1)
    ld1 = Lambda(lambda x: K.sum(x, axis=1), output_shape=(embeddings_dim,))(td1)

    cat = concatenate([ld, ld1])
    X = Dense(hidden_units, activation="relu")(cat)
    X = Dropout(dropout)(X)
    X = Dense(hidden_units, activation="relu")(X)
    X = Dropout(dropout)(X)
    X = Dense(hidden_units, activation="relu")(X)
    X = Dropout(dropout)(X)
    X = Dense(hidden_units, activation="relu")(X)
    X = Dropout(dropout)(X)
    X = Dense(1, activation="sigmoid", name="output")(X)

    model = Model(inputs=[X1_input, X2_input], outputs=X, name="GRN_model")

    optimizer = optimizers.Adam()
    # optimizer = optimizers.RMSprop()
    model.compile(optimizer=optimizer,
                  loss="binary_crossentropy",
                  metrics=['accuracy', precision, recall, f1_score])
    return model

dropout = 0.2
trainable_embeddings = False
hidden_units = 200
input_shape = (max_len,)
model = create_model(input_shape, embedding_dim, embedding_matrix, vocab_size,
                     max_len, trainable_embeddings, dropout, hidden_units)
model.summary()
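The precision, recall, and f1_score entries passed to compile are my own custom metric functions (omitted above for brevity); they compute the standard binary-classification quantities. A minimal pure-Python sketch of the arithmetic they perform (illustration only; the actual Keras metrics operate on tensors, and the function name here is hypothetical):

```python
# Pure-Python sketch of what the custom precision/recall/f1 metrics compute.
# (Illustration only; the real metrics work on Keras tensors.)

def binary_metrics(y_true, y_pred, threshold=0.5):
    """Return (precision, recall, f1) for binary labels and sigmoid scores."""
    preds = [1 if p >= threshold else 0 for p in y_pred]
    tp = sum(1 for t, p in zip(y_true, preds) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, preds) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, preds) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Here precision = recall = f1 = 2/3
print(binary_metrics([1, 0, 1, 1], [0.9, 0.8, 0.3, 0.7]))
```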

And to train the model:

# Save a checkpoint after each epoch in which the validation loss improves
filepath = project_path+'model_paraprase_detection_pad_FFN.h5'
checkpoint = ModelCheckpoint(filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='min')
# Reduce the learning rate whenever the validation loss plateaus
reduce_alpha = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=1, min_lr=0.001)
# Stop training if the validation loss stops improving
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=2)
callbacks = [checkpoint, es, reduce_alpha]
epochs = 100
batch_size = 64
history = model.fit(x=[q1_X_train, q2_X_train],
                    y=y_train,
                    epochs=epochs,
                    batch_size=batch_size,
                    validation_data=([q1_X_test, q2_X_test], y_test),callbacks=callbacks)
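As an aside I noticed while double-checking the callbacks: Adam's default learning rate is 0.001, which equals the min_lr I pass to ReduceLROnPlateau, so the reducer can never actually lower the rate. A pure-Python sketch of the plateau rule (a hypothetical simplification, not the Keras implementation) illustrates this:

```python
# Simplified sketch of ReduceLROnPlateau: after `patience` epochs with no
# improvement, multiply lr by `factor`, clipped below at `min_lr`.
# (Hypothetical simplification, not the Keras implementation.)

def reduce_on_plateau(losses, lr=0.001, factor=0.2, patience=1, min_lr=0.001):
    best = float("inf")
    wait = 0
    for loss in losses:
        if loss < best:
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:
                lr = max(lr * factor, min_lr)
                wait = 0
    return lr

# With lr == min_lr the rate never drops, even on a long plateau:
print(reduce_on_plateau([0.7, 0.7, 0.7, 0.7]))              # stays 0.001
print(reduce_on_plateau([0.7, 0.7, 0.7, 0.7], min_lr=1e-6))  # actually decays
```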

So it is at this call to model.fit() that I get the error. I have checked every variable and printed some of the values. I am running this in a Jupyter notebook. TensorFlow versions:

tensorflow              2.5.0              
tensorflow-estimator    2.5.0  

Please help me resolve this error.

0 Answers:

No answers yet.