Is the learning rate for the Adam method too high?

Time: 2019-11-22 11:53:50

Tags: python machine-learning

I am trying to estimate systolic blood pressure. I feed 27 PPG features into an ANN and get the results below. Is the learning rate good? If not, is it too high or too low? These are my results.

I set the learning rate to 0.000001. I think it is still too high; the loss seems to drop too quickly.

loss: 5.1285 - mse: 57.7257 - val_loss: 6.0154 - val_mse: 73.9671
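(For scale, assuming sys is measured in mmHg: mse ≈ 57.73 corresponds to a training RMSE of about √57.73 ≈ 7.6 mmHg, and val_mse ≈ 73.97 to a validation RMSE of about √73.97 ≈ 8.6 mmHg.)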

# imports needed by the code below
import numpy
import pandas
import sklearn.model_selection
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping

# import data
data = pandas.read_csv("data.csv", sep=",")
data = data[["cp", "st", "dt", "sw10", "dw10", "sw10+dw10", "dw10/sw10", "sw25", "dw25",
             "sw25+dw25", "dw25/sw25", "sw33", "dw33", "sw33+dw33", "dw33/sw33", "sw50",
             "dw50", "sw50+dw50", "dw50/sw50", "sw66", "dw66", "sw66+dw66", "dw66/sw66",
             "sw75", "dw75", "sw75+dw75", "dw75/sw75", "sys"]]

# data description
described_data = data.describe()
print(described_data)
print(len(data))

# # histograms of input data (features)
# data.hist(figsize=(12, 10))
# plt.show()

# index and shuffle data
data.reset_index(inplace=True, drop=True)
data = data.reindex(numpy.random.permutation(data.index))

# x (parameters) and y (blood pressure) data
predict = "sys"
X = numpy.array(data.drop(columns=[predict]))
y = numpy.array(data[predict])

# Splitting the total data into subsets: 90% - training, 10% - testing
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, test_size=0.1, random_state=0)


def feature_normalize(X):   # standardization function
    mean = numpy.mean(X, axis=0)
    std = numpy.std(X, axis=0)
    return (X - mean) / std


# Features scaling
X_train_standardized = feature_normalize(X_train)
X_test_standardized = feature_normalize(X_test)

# Build the ANN model
model = Sequential()

# Adding the input layer and the first hidden layer
model.add(Dense(25, activation='sigmoid', input_dim=27))
# Adding the second hidden layer
model.add(Dense(units=15, activation='sigmoid'))
# Adding the output layer
model.add(Dense(units=1, activation='linear', kernel_initializer='normal'))
model.summary()
optimizer = keras.optimizers.Adam(learning_rate=0.000001)

# Compiling the model
model.compile(loss='mae', optimizer='adam', metrics=['mse'])


# Early stopping to prevent overfitting
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=10, verbose=1, mode='auto',
                        restore_best_weights=True)
# Fitting the ANN to the Training set
history = model.fit(X_train_standardized, y_train, validation_split=0.2, verbose=2,
                    epochs=1000, batch_size=5, callbacks=[monitor])

[figure: data loss]

[figure: prediction]
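The two figure placeholders above presumably show the training-loss curve and a predicted-vs-actual plot. A minimal sketch of how such figures could be produced from the code above (assuming matplotlib is available; history, model, X_test_standardized and y_test are the names defined earlier):

import matplotlib.pyplot as plt

# training and validation loss (MAE) over epochs
plt.figure()
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.xlabel('epoch')
plt.ylabel('MAE')
plt.legend()

# predicted vs. actual systolic pressure on the test split
y_pred = model.predict(X_test_standardized).flatten()
plt.figure()
plt.scatter(y_test, y_pred, s=10)
plt.xlabel('actual sys')
plt.ylabel('predicted sys')
plt.show()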

1 Answer:

Answer 0 (score: 2):

Your learning rate is not being used, because you are not compiling the model with your optimizer instance. This:

    model.compile(loss='mae', optimizer='adam', metrics=['mse'])


should be:

    model.compile(loss='mae', optimizer=optimizer, metrics=['mse'])
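As a quick sanity check, a minimal sketch (assuming Keras 2.3+ or tf.keras, where Adam accepts a learning_rate argument) that compiles with the optimizer instance and prints the learning rate the model will actually use:

    optimizer = keras.optimizers.Adam(learning_rate=0.000001)
    model.compile(loss='mae', optimizer=optimizer, metrics=['mse'])

    # prints ~1e-06; with optimizer='adam' it would be the default 0.001
    print(keras.backend.get_value(model.optimizer.learning_rate))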

Regarding the question itself: as Half-Blood Prince said, it is hard to tell without knowing your dataset. The condition of the data itself also matters. I would definitely recommend the following:
