Keras loaded model output differs from the trained model's output

Date: 2020-04-01 14:49:26

Tags: keras output conv-neural-network shapes timestep

When I train the model, it has a two-dimensional output, (None, 1), corresponding to the time series I want to predict. However, whenever I load the saved model to make predictions, it has a three-dimensional output, (None, 40, 1), where 40 corresponds to the n_steps required to fit the Conv1D network. What is going wrong?

The code is as follows:

    import numpy as np
    import matplotlib.pyplot as plt

    df = np.load('Principal.npy')

    # Conv1D autoencoder
    # model = load_model('ModeloConv1D.h5')
    model = autoencoder_conv1D((2, 20, 17), n_steps=40)
    model.load_weights('weights_35067.hdf5')

    # summarize model
    model.summary()

    # split into input (X) and output (Y) variables
    X = f.separar_interface(df, n_steps=40)
    # THE X INPUT SHAPE (59891, 17) length and attributes, respectively

    # reshape to the Conv1D input format
    X = X.reshape(X.shape[0], 2, 20, X.shape[2])

    # make predictions
    test_predictions = model.predict(X)
    # test_predictions.shape = (59891, 40, 1)

    test_predictions = model.predict(X).flatten()
    # test_predictions.shape = (2395640,)

    plt.figure(3)
    plt.plot(test_predictions)
    plt.legend(['Prediction'])
    plt.show()
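As an aside, the shapes printed in the script follow directly from NumPy semantics: `flatten()` always returns a one-dimensional array, so flattening the (59891, 40, 1) prediction tensor yields 59891 × 40 = 2,395,640 values along a single axis. A minimal sketch (the zero-filled array stands in for the real predictions):

```python
import numpy as np

# Zero-filled stand-in for model.predict(X), which has shape (59891, 40, 1)
preds = np.zeros((59891, 40, 1))

flat = preds.flatten()
print(flat.shape)  # (2395640,) -- 59891 * 40 values collapsed into one axis
```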

In the plot below, you can see that it is plotting the predictions in the input's format. [figure: plot of test_predictions]

This is the network architecture:

 _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    time_distributed_70 (TimeDis (None, 1, 31, 24)         4104      
    _________________________________________________________________
    time_distributed_71 (TimeDis (None, 1, 4, 24)          0         
    _________________________________________________________________
    time_distributed_72 (TimeDis (None, 1, 4, 48)          9264      
    _________________________________________________________________
    time_distributed_73 (TimeDis (None, 1, 1, 48)          0         
    _________________________________________________________________
    time_distributed_74 (TimeDis (None, 1, 1, 64)          12352     
    _________________________________________________________________
    time_distributed_75 (TimeDis (None, 1, 1, 64)          0         
    _________________________________________________________________
    time_distributed_76 (TimeDis (None, 1, 64)             0         
    _________________________________________________________________
    lstm_17 (LSTM)               (None, 100)               66000     
    _________________________________________________________________
    repeat_vector_9 (RepeatVecto (None, 40, 100)           0         
    _________________________________________________________________
    lstm_18 (LSTM)               (None, 40, 100)           80400     
    _________________________________________________________________
    time_distributed_77 (TimeDis (None, 40, 1024)          103424    
    _________________________________________________________________
    dropout_9 (Dropout)          (None, 40, 1024)          0         
    _________________________________________________________________
    dense_18 (Dense)             (None, 40, 1)             1025      
    =================================================================

1 Answer:

Answer 0 (score: 0)

Since I found my own mistake, and I think it may be useful to others, I will answer my own question: in fact, the network output has the same format as the training dataset's labels. That means the saved model is producing output of shape (None, 40, 1) because that is exactly the shape you (I) gave the training output labels.

You (that is, I) see a difference between the network output at training time and at prediction time because at training time you most likely used something like train_test_split, which randomizes the sample order. So what you saw at the end of training was the output for that randomized batch.
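To illustrate the shuffling point: `train_test_split` shuffles by default, so held-out predictions are no longer in chronological order when plotted. A hedged sketch with toy data (the variable names are illustrative, not from the original script); for a time series you would pass `shuffle=False`:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(10).reshape(-1, 1)
y = np.arange(10)

# Default shuffle=True: the test set is a random subset, so its order
# no longer matches the original time axis.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# shuffle=False keeps chronological order: the last 30% becomes the test set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, shuffle=False)
print(y_te)  # [7 8 9]
```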

To fix your problem (my problem), you should change the shape of the dataset labels from (None, 40, 1) to (None, 1), since you have a regression problem over a time series. Accordingly, it is best to place a Flatten layer before the dense output layer. That way you get the result you want.
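As a sketch of that fix, here is a minimal stand-in for the tail of the decoder. The layer sizes match the summary above, but the input layer and the `Dropout` rate are assumptions, since the original `autoencoder_conv1D` code is not shown. Inserting a `Flatten` before the final `Dense` collapses the 40-step axis, so the model output becomes (None, 1), matching (None, 1) labels:

```python
from tensorflow.keras import layers, models

# Hypothetical reconstruction of the decoder head from the summary above;
# the input (None, 40, 100) stands in for the output of lstm_18.
inp = layers.Input(shape=(40, 100))
x = layers.TimeDistributed(layers.Dense(1024, activation='relu'))(inp)  # like time_distributed_77
x = layers.Dropout(0.5)(x)   # rate assumed; not shown in the question
x = layers.Flatten()(x)      # (None, 40, 1024) -> (None, 40960)
out = layers.Dense(1)(x)     # single regression output per sample: (None, 1)
head = models.Model(inp, out)
print(head.output_shape)  # (None, 1)
```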
