How to reduce the loss in Keras

Date: 2020-05-06 08:24:44

Tags: python keras

I am trying to train a basic MLP, but the loss does not decrease even after training for longer and longer.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from keras.models import Sequential
    from keras.layers import Dense, Dropout
    from keras import optimizers

    # Toy data: y = x + 4
    xtraindemo = np.arange(1, 1001, 1)
    ytraindemo = np.arange(5, 1005, 1)

    X_train1, X_val1, y_train1, y_val1 = train_test_split(
        xtraindemo, ytraindemo, test_size=0.1, random_state=42)
    # Reshape to (n_samples, 1) to match input_shape=(1,)
    X_train1 = X_train1.reshape(-1, 1)
    X_val1 = X_val1.reshape(-1, 1)
    xtest1 = np.arange(2000, 2500, 1).reshape(-1, 1)

    model = Sequential()
    model.add(Dense(2048, kernel_initializer='uniform', activation='relu', input_shape=(1,)))
    #model.add(Dropout(0.2))
    model.add(Dense(1024, activation='relu'))
    #model.add(Dropout(0.2))
    model.add(Dense(1, activation='relu'))

    sgd = optimizers.SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True)
    model.compile(loss='mean_squared_error', optimizer=sgd, metrics=['accuracy'])
    model.summary()
    history = model.fit(X_train1, y_train1, validation_data=(X_val1, y_val1),
                        epochs=100, callbacks=[], batch_size=64)

    ypred = model.predict(xtest1)
    print(ypred)


    Epoch 3/100
    900/900 [==============================] - 1s 699us/step - loss: 339215.4711 - accuracy: 0.0000e+00 - val_loss: 325595.7388 - val_accuracy: 0.0000e+00
    Epoch 4/100
    900/900 [==============================] - 1s 724us/step - loss: 339215.4744 - accuracy: 0.0000e+00 - val_loss: 325595.7388 - val_accuracy: 0.0000e+00
    Epoch 5/100
    900/900 [==============================] - 1s 712us/step - loss: 339215.4744 - accuracy: 0.0000e+00 - val_loss: 325595.7388 - val_accuracy: 0.0000e+00
    Epoch 6/100
    900/900 [==============================] - 1s 715us/step - loss: 339215.4711 - accuracy: 0.0000e+00 - val_loss: 325595.7388 - val_accuracy: 0.0000e+00
    Epoch 7/100
    900/900 [==============================] - 1s 703us/step - loss: 339215.4733 - accuracy: 0.0000e+00 - val_loss: 325595.7388 - val_accuracy: 0.0000e+00
    Epoch 8/100
    900/900 [==============================] - 1s 721us/step - loss: 339215.4744 - accuracy: 0.0000e+00 - val_loss: 325595.7388 - val_accuracy: 0.0000e+00
    Epoch 9/100
    900/900 [==============================] - 1s 707us/step - loss: 339215.4711 - accuracy: 0.0000e+00 - val_loss: 325595.7388 - val_accuracy: 0.0000e+00
    Epoch 10/100
    900/900 [==============================] - 1s 727us/step - loss: 339215.4756 - accuracy: 0.0000e+00 - val_loss: 325595.7388 - val_accuracy: 0.0000e+00
    Epoch 11/100
    900/900 [==============================] - 1s 692us/step - loss: 339215.4733 - accuracy: 0.0000e+00 - val_loss: 325595.7388 - val_accuracy: 0.0000e+00
    Epoch 12/100
    900/900 [==============================] - 1s 705us/step - loss: 339215.4733 - accuracy: 0.0000e+00 - val_loss: 325595.7388 - val_accuracy: 0.0000e+00
    Epoch 13/100
    900/900 [==============================] - 1s 704us/step - loss: 339215.4767 - accuracy: 0.0000e+00 - val_loss: 325595.7388 - val_accuracy: 0.0000e+00
    Epoch 14/100
    900/900 [==============================] - 1s 704us/step - loss: 339215.4700 - accuracy: 0.0000e+00 - val_loss: 325595.7388 - val_accuracy: 0.0000e+00
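
For comparison, here is a minimal sketch of the same regression with the inputs and targets standardized and a linear activation on the output layer, which is the usual setup for MSE regression; with unscaled targets in the thousands and a ReLU unit on the output, the last layer can get stuck at zero and keep the loss flat, as in the log above. The scaler variables (x_mean, x_std, y_mean, y_std) and the smaller layer size are illustrative choices, not part of the original code.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense
    from keras import optimizers

    # Standardize inputs and targets so the network does not have to
    # fit raw values in the hundreds/thousands.
    x = np.arange(1, 1001, 1).astype('float32').reshape(-1, 1)
    y = np.arange(5, 1005, 1).astype('float32').reshape(-1, 1)
    x_mean, x_std = x.mean(), x.std()
    y_mean, y_std = y.mean(), y.std()
    xs = (x - x_mean) / x_std
    ys = (y - y_mean) / y_std

    model = Sequential()
    model.add(Dense(64, activation='relu', input_shape=(1,)))
    # Linear output for regression; a ReLU here can die and pin the loss.
    model.add(Dense(1, activation='linear'))

    sgd = optimizers.SGD(lr=0.01, momentum=0.9, nesterov=True)
    # No 'accuracy' metric: it is not meaningful for continuous targets.
    model.compile(loss='mean_squared_error', optimizer=sgd)
    model.fit(xs, ys, validation_split=0.1, epochs=100, batch_size=64, verbose=0)

    # De-standardize predictions back to the original target scale.
    # Note: 2000-2500 lies far outside the 1-1000 training range, so even a
    # converged model may extrapolate poorly there.
    xtest = np.arange(2000, 2500, 1).reshape(-1, 1)
    ypred = model.predict((xtest - x_mean) / x_std) * y_std + y_mean
    print(ypred[:5])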

0 Answers:

No answers yet