Training accuracy is 0 while the loss decreases during training of an LSTM model in Keras

Time: 2021-04-27 11:06:59

Tags: python tensorflow keras deep-learning lstm

I am training a small LSTM model with 7 LSTM units followed by a single-unit sigmoid layer. The loss decreases during training, but the accuracy stays at 0 and never changes. Can you tell me why this happens? If you need the data files, please leave a comment.

import numpy
import tensorflow as tf
from tensorflow.keras.layers import Dense, LSTM
from tensorflow.keras.losses import BinaryCrossentropy
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import SGD

additional_metrics = ['accuracy']
loss_function = BinaryCrossentropy()
number_of_epochs = 500
optimizer = SGD()
validation_split = 0.20
verbosity_mode = 1
mini = 0
maxi = 0
mean = 0

def myfunc(arg):  # mean normalization: (x - mean) / (max - min)
    global mini, maxi, mean
    return (arg - mean) / (maxi - mini)


cgm = numpy.load('cgm_train_new.npy')
labels = numpy.load('labels_train_new.npy')
labs = list()
cgm_flat = cgm.flatten()
mini = min(cgm_flat)
maxi = max(cgm_flat)
mean = sum(cgm_flat) / len(cgm_flat)
cgm = numpy.apply_along_axis(myfunc, 0, cgm)

# map each label to a soft target: 0.99 for positive, 0.01 for negative
for each in labels:
    if each[-1] == 1: labs.append(.99)
    else: labs.append(.01)

RNNmodel = Sequential()
RNNmodel.add(LSTM(7, activation='tanh'))
RNNmodel.add(Dense(1, activation='sigmoid'))
RNNmodel.compile(optimizer=optimizer, loss=loss_function, metrics=additional_metrics)
cgm_rs = tf.reshape(cgm, [len(cgm), 7, 1])
history = RNNmodel.fit(
    cgm_rs,
    tf.reshape(labs, [len(labs), 1, 1]),
    batch_size=len(labs),
    epochs=number_of_epochs,
    verbose=verbosity_mode)


answers = RNNmodel.predict(cgm_rs)
for each in answers:
    print(each)

CGM file
Labels

0 Answers:

No answers yet