Why is the accuracy constant while the loss changes?

Date: 2020-07-18 17:58:43

Tags: python tensorflow keras

As shown below, I have two functions: get_data() returns a DataFrame with the history of the chosen asset and passes it to train_model(). Everything works, but while the model trains the accuracy does not seem to change. The loss does go down, yet the accuracy stays the same after the second epoch, and it does not change even when training for 1000 epochs.

Things I have tried changing in this code:

  1. Changing the unit count of each LSTM layer
  2. Using different DataFrames from a different source (alpha-vantage)
  3. Changing the epoch count

Unfortunately, nothing changed.

import os

import numpy as np
import pandas as pd
import tensorflow as tf
import yfinance as yf
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense


def train_model(df):

    if not os.path.exists("/py_stuff/"):
        os.makedirs("/py_stuff/")


    checkpoint_filepath ="/py_stuff/check_point"
    weights_checkpoint = "/py_stuff/"


    checkpoint_dir = os.path.dirname(checkpoint_filepath)


    model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
        filepath=checkpoint_filepath,
        save_weights_only=True,
        monitor='accuracy',
        mode='max',
        save_best_only=True,
        verbose=1)


    # use the second column of the frame as the single input feature
    dataset_train = df
    training_set = dataset_train.iloc[:, 1:2].values

    # scale the feature into [0, 1] before feeding it to the LSTM
    sc = MinMaxScaler(feature_range=(0, 1))
    training_set_scaled = sc.fit_transform(training_set)

    # build sliding windows: each sample holds 100 consecutive scaled values
    # and the target is the value that immediately follows the window
    X_train = []
    y_train = []
    for i in range(100, len(df)):
        X_train.append(training_set_scaled[i-100:i, 0])
        y_train.append(training_set_scaled[i, 0])
    X_train, y_train = np.array(X_train), np.array(y_train)
    # reshape to (samples, timesteps, features) as LSTM layers expect
    X_train = np.reshape(X_train, (X_train.shape[0], X_train.shape[1], 1))


    model = Sequential()
    model.add(LSTM(units=100, return_sequences=True, input_shape=(X_train.shape[1], 1)))
    model.add(Dropout(0.2))
    model.add(LSTM(units=100, return_sequences=True))
    model.add(Dropout(0.2))
    model.add(LSTM(units=100, return_sequences=True))
    model.add(Dropout(0.2))
    model.add(LSTM(units=100))
    model.add(Dropout(0.2))
    model.add(Dense(units=1))
    model.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])

    ## loading weights

    try:
        model.load_weights(checkpoint_filepath)
        print("Weights loaded successfully $$$$$$$ ")
    except Exception:
        print("No Weights Found !!! ")



    model.fit(X_train, y_train, epochs=50, batch_size=100, callbacks=[model_checkpoint_callback])

    ## saving weights

    try:
        model.save(checkpoint_filepath)
        model.save_weights(filepath=checkpoint_filepath)
        print("Saving weights and model done ")
    except OSError:
        print("Error saving weights and model !!!!!!!!!!!! ")





def get_data(CHOICE):
        data = yf.download(  # or pdr.get_data_yahoo(...
        # tickers list or string as well
                tickers = CHOICE,

        # use "period" instead of start/end
        # valid periods: 1d,5d,1mo,3mo,6mo,1y,2y,5y,10y,ytd,max
        # (optional, default is '1mo')
                period = "5y",

        # fetch data by interval (including intraday if period < 60 days)
        # valid intervals: 1m,2m,5m,15m,30m,60m,90m,1h,1d,5d,1wk,1mo,3mo
        # (optional, default is '1d')
                interval = "1d",

        # group by ticker (to access via data['SPY'])
        # (optional, default is 'column')
                group_by = 'ticker',

        # adjust all OHLC automatically
        # (optional, default is False)
                auto_adjust = True,

        # download pre/post regular market hours data
        # (optional, default is False)
                prepost = True,

        # use threads for mass downloading? (True/False/Integer)
        # (optional, default is True)
                threads = True,

        # proxy URL scheme to use when downloading
        # (optional, default is None)
                proxy = None
        )

        dff = pd.DataFrame(data)
        return dff





df = get_data(CHOICE="BTC-USD")



train_model(df)

2 Answers:

Answer 0 (score: 1):

Judging from the loss function, you have a regression network. Your loss is mean squared error, and the accuracy metric is meaningless for a regression network; it only makes sense for classification models. You can therefore remove metrics=['accuracy'] from the compile call and evaluate the model using the loss value instead. If the loss is decreasing, your optimizer is training the network successfully.
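A minimal sketch of that change, reusing the names from the question's code (the 'mae' metric and the adjusted checkpoint monitor are my additions, not part of this answer): drop 'accuracy', track mean absolute error if you still want a second readout, and point the ModelCheckpoint at the loss, since it can no longer monitor a metric that is gone.

# compile without the meaningless 'accuracy' metric; 'mae' (mean absolute
# error) is a regression metric Keras supports out of the box
model.compile(optimizer='adam',
              loss='mean_squared_error',
              metrics=['mae'])

# the checkpoint callback must then track the loss instead of accuracy
model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_filepath,
    save_weights_only=True,
    monitor='loss',
    mode='min',
    save_best_only=True,
    verbose=1)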

Answer 1 (score: 1):

What you are dealing with is a regression problem, for which accuracy is not defined.

Accuracy is defined in terms of the probability of belonging to a particular class, for example, the probability that the output is the digit 9. The set of classes is finite (or countable).
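As a small illustration of that definition (my sketch, not part of the original answer): for a digit classifier, accuracy is the fraction of samples whose predicted class matches the true class, which is well defined because both come from the finite set {0, ..., 9}.

import numpy as np

labels = np.array([9, 3, 5, 9])     # true digit classes
predicted = np.array([9, 3, 1, 9])  # predicted digit classes

# accuracy: fraction of predictions that land in the same class as the label
accuracy = np.mean(labels == predicted)
print(accuracy)  # 0.75, since three of the four predictions match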

In your case, the network outputs a real number, and the notion of accuracy makes no sense in that setting.

For example, the probability that your output equals exactly 1.000 is 0. Although (surprisingly!) a probability of zero does not mean the event can never occur!

Ideally, Keras should return an error saying that accuracy is undefined here.
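One way to see why the reported accuracy freezes (a sketch of my own, under the assumption that the string 'accuracy' combined with an MSE loss resolves to Keras's generic exact-match metric): that metric counts how often a prediction equals its target bit for bit, and a real-valued regression output essentially never does, so the number barely moves no matter how long you train.

import numpy as np
import tensorflow as tf

y_true = np.array([[0.50], [0.70], [0.90]], dtype="float32")
y_pred = np.array([[0.49], [0.71], [0.90]], dtype="float32")

# tf.keras.metrics.Accuracy counts exact elementwise equality
acc = tf.keras.metrics.Accuracy()
acc.update_state(y_true, y_pred)
print(acc.result().numpy())  # ~0.333: only the exact match in the last row counts

Close regression predictions such as 0.49 versus 0.50 contribute nothing, so the metric stays flat while the MSE loss keeps improving, which matches the behaviour described in the question.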