I'm having trouble recording 'val_loss' and 'val_acc' in Keras. 'loss' and 'acc' are easy, because they are always recorded in the history of model.fit.
'val_loss' is recorded if validation is enabled in fit, and 'val_acc' is recorded if validation and accuracy monitoring are enabled. But what does that mean?
My call is model.fit(train_data, train_labels, epochs=64, batch_size=10, shuffle=True, validation_split=0.2, callbacks=[history]).
As you can see, I use 5-fold cross-validation and shuffle the data. How, in this case, do I enable 'validation' in fit so that 'val_loss' and 'val_acc' are recorded?
Thanks
Answer 0 (score: 1)
Update: the val_accuracy dictionary key no longer seems to work today. I'm not sure why, but I've removed that code from here, even though the OP asked how to log it (besides, loss is what really matters for comparing cross-validation results).

Using Python 3.7 and TensorFlow 2.0, and after a lot of searching, guessing, and repeated failure, the following worked for me. I started from someone else's script and wrote what I needed to a .json file; it produces one such .json file per training run, showing the validation loss at each epoch, so you can see how the model converged (or didn't); accuracy is recorded, but not as a performance metric.
Note: you need to fill in yourTrainDir, yourTrainingData, yourValidationData, yourOptimizer, yourLossFunctionFromKerasOrElsewhere, yourNumberOfEpochs, etc. to enable this code:
import numpy as np
import os
import tensorflow as tf
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint, LambdaCallback
import json
model.compile(
    optimizer=yourOptimizer,
    loss=yourLossFunctionFromKerasOrElsewhere()
)

# create a custom callback to enable future cross-validation efforts
yourTrainDir = os.getcwd() + '/yourOutputFolderName/'
uniqueID = np.random.randint(999999)  # To distinguish validation runs by saved JSON name
epochValidationLog = open(
    yourTrainDir +
    'val_log_per_epoch_' +
    '{}_'.format(uniqueID) +
    '.json',
    mode='wt',
    buffering=1
)
ValidationLogsCallback = LambdaCallback(
    on_epoch_end=lambda epoch, logs: epochValidationLog.write(
        json.dumps(
            {
                'oneIndexedEpoch': epoch + 1,
                'Validationloss': logs['val_loss']
            }
        ) + '\n'
    ),
    on_train_end=lambda logs: epochValidationLog.close()
)

# set up the list of callbacks
callbacksList = [
    ValidationLogsCallback,
    EarlyStopping(patience=40, verbose=1),
]

results = model.fit(
    x=yourTrainingData,
    steps_per_epoch=len(yourTrainingData),
    validation_data=yourValidationData,
    validation_steps=len(yourValidationData),
    epochs=yourNumberOfEpochs,
    verbose=1,
    callbacks=callbacksList
)
This will produce a JSON file in the yourTrainDir folder, recording the validation loss and accuracy of each training epoch as its own dictionary-like item. Note that the epoch numbering is one-indexed, starting at 1, so it matches TensorFlow's output rather than the actual indexing in Python.

I'm outputting to .json files, but it could be anything. Here is my code for analyzing the JSON files produced; I could have put everything in one script, but didn't.
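For reference, each line of the per-epoch log file is one self-contained JSON object. A minimal sketch of what writing and reading one such line looks like (the loss value is made up; the keys match the callback above):

```python
import json

# One record per epoch, mirroring the keys used by the LambdaCallback above
record = {'oneIndexedEpoch': 1, 'Validationloss': 0.6931}
line = json.dumps(record)

# Reading a log back is just json.loads applied per line
parsed = json.loads(line)
assert parsed['Validationloss'] == 0.6931
```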
The final code block creates a single JSON file, CVResults.json, summarizing the contents produced above:

import os
from pathlib import Path
import json
currentDirectory = os.getcwd()
outFileName = 'CVResults.json'
outFile = open(outFileName, mode='wt')
validationLogPaths = Path().glob('val_log_per_epoch_*.json')

# Necessary list to detect short unique IDs for each training session
stringDecimalDigits = [
    '1',
    '2',
    '3',
    '4',
    '5',
    '6',
    '7',
    '8',
    '9',
    '0'
]
setStringDecimalDigits = set(stringDecimalDigits)
trainingSessionsList = []

# Load the JSON files into memory to allow reading.
for validationLogFile in validationLogPaths:
    trainingUniqueIDCandidate = str(validationLogFile)[18:21]
    # Pad unique IDs with fewer than three digits with zeros at front
    thirdPotentialDigitOfUniqueID = trainingUniqueIDCandidate[2]
    if setStringDecimalDigits.isdisjoint(thirdPotentialDigitOfUniqueID):
        secondPotentialDigitOfUniqueID = trainingUniqueIDCandidate[1]
        if setStringDecimalDigits.isdisjoint(secondPotentialDigitOfUniqueID):
            trainingUniqueID = '00' + trainingUniqueIDCandidate[:1]
        else:
            trainingUniqueID = '0' + trainingUniqueIDCandidate[:2]
    else:
        trainingUniqueID = trainingUniqueIDCandidate
    trainingSessionsList.append((trainingUniqueID, validationLogFile))
trainingSessionsList.sort(key=lambda x: x[0])

# Analyze and export cross-validation results
for replicate in range(len(dict(trainingSessionsList).keys())):
    validationLogFile = trainingSessionsList[replicate][1]
    fileOpenForReading = open(
        validationLogFile, mode='r', buffering=1
    )
    with fileOpenForReading as openedFile:
        jsonValidationData = [json.loads(line) for line in openedFile]
    bestEpochResultsDict = {}
    oneIndexedEpochsList = []
    validationLossesList = []
    for line in range(len(jsonValidationData)):
        tempDict = jsonValidationData[line]
        oneIndexedEpochsList.append(tempDict['oneIndexedEpoch'])
        validationLossesList.append(tempDict['Validationloss'])
    trainingStopIndex = min(
        range(len(validationLossesList)),
        key=validationLossesList.__getitem__
    )
    bestEpochResultsDict['Integer_unique_ID'] = trainingSessionsList[replicate][0]
    bestEpochResultsDict['Min_val_loss'] = validationLossesList[trainingStopIndex]
    bestEpochResultsDict['Last_train_epoch'] = oneIndexedEpochsList[trainingStopIndex]
    outFile.write(json.dumps(bestEpochResultsDict, sort_keys=True) + '\n')
outFile.close()
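The nested digit checks above left-pad IDs shorter than three digits with zeros. A more compact alternative is a sketch using a regular expression and str.zfill (the helper name is hypothetical):

```python
import re

def padded_unique_id(log_filename):
    # Pull the numeric ID out of 'val_log_per_epoch_<ID>_.json' and left-pad to 3 digits
    match = re.search(r'val_log_per_epoch_(\d+)_', log_filename)
    return match.group(1).zfill(3) if match else None

print(padded_unique_id('val_log_per_epoch_42_.json'))  # 042
```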
Answer 1 (score: 1)
The val_loss and val_acc data can be saved using Keras's ModelCheckpoint class.
from keras.callbacks import ModelCheckpoint
checkpointer = ModelCheckpoint(filepath='yourmodelname.hdf5',
                               monitor='val_loss',
                               verbose=1,
                               save_best_only=False)
history = model.fit(X_train, y_train, epochs=100, validation_split=0.02, callbacks=[checkpointer])
history.history.keys()
# output
# dict_keys(['val_loss', 'val_mae', 'val_acc', 'loss', 'mae', 'acc'])
The important point is that if you omit the validation_split argument, you will only get values for loss, mae, and acc.
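As a quick sanity check on that point, the recorded metrics can be inspected from the history dictionary itself. A minimal pure-Python sketch (the helper name and sample values are made up):

```python
def validation_keys(history_dict):
    # Validation metrics are the keys Keras prefixes with 'val_'
    return sorted(k for k in history_dict if k.startswith('val_'))

# With validation_split set, val_* keys are present:
with_val = {'loss': [0.5], 'mae': [0.4], 'acc': [0.8],
            'val_loss': [0.6], 'val_mae': [0.5], 'val_acc': [0.7]}
print(validation_keys(with_val))  # ['val_acc', 'val_loss', 'val_mae']

# Without validation_split, only training metrics remain:
print(validation_keys({'loss': [0.5], 'mae': [0.4], 'acc': [0.8]}))  # []
```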
Hope this helps!
Answer 2 (score: 0)
According to the Keras documentation, the models.fit method has the following signature:
fit(x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None)
'val_loss' is recorded if validation is enabled in fit, and val_acc is recorded if validation and accuracy monitoring are enabled.
- This comes from the keras.callbacks.Callback() object, if used for the callbacks parameter in the above fit method. It can be used as follows:
from keras.callbacks import Callback

logs = Callback()
model.fit(train_data, train_labels, epochs=64, batch_size=10, shuffle=True, validation_split=0.2, callbacks=[logs])
# Instead of using the history callback, which you've used.
"'val_loss' is recorded if validation is enabled in fit" means that, when using the model.fit method, you are using either the validation_split parameter or the validation_data parameter to specify the tuple (x_val, y_val) or the tuple (x_val, y_val, val_sample_weights) on which to evaluate the loss and any model metrics at the end of each epoch.

"A History object. Its History.history attribute is a record of training loss values and metrics values at successive epochs, as well as validation loss values and validation metrics values (if applicable)." - Keras documentation (return value of the model.fit method)

In your model below:

model.fit(train_data, train_labels, epochs=64, batch_size=10, shuffle=True, validation_split=0.2, callbacks=[history])

If you use a variable to save model.fit, you are using the History callback, like so:

history = model.fit(train_data, train_labels, epochs=64, batch_size=10, shuffle=True, validation_split=0.2)
history.history

history.history will output a dictionary with loss, acc, val_loss and val_acc for you. You can save this data with the CSVLogger callback as given in the comments, or by the longer method of writing a dictionary to a csv file (writing a dictionary to a csv). For example:

{'val_loss': [14.431451635814849,
14.431451635814849,
14.431451635814849,
14.431451635814849,
14.431451635814849,
14.431451635814849,
14.431451635814849,
14.431451635814849,
14.431451635814849,
14.431451635814849],
'val_acc': [0.1046428571712403,
0.1046428571712403,
0.1046428571712403,
0.1046428571712403,
0.1046428571712403,
0.1046428571712403,
0.1046428571712403,
0.1046428571712403,
0.1046428571712403,
0.1046428571712403],
'loss': [14.555215610322499,
14.555215534028553,
14.555215548560733,
14.555215588524229,
14.555215592157273,
14.555215581258137,
14.555215575808571,
14.55521561940511,
14.555215563092913,
14.555215624854679],
'acc': [0.09696428571428571,
0.09696428571428571,
0.09696428571428571,
0.09696428571428571,
0.09696428571428571,
0.09696428571428571,
0.09696428571428571,
0.09696428571428571,
0.09696428571428571,
0.09696428571428571]}
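A dictionary like the one above can be written to a plain CSV, one row per epoch. A sketch using the standard csv module (not the linked recipe verbatim; the sample values are shortened):

```python
import csv
import io

history_dict = {'loss': [14.5552, 14.5551], 'val_loss': [14.4314, 14.4314]}

buffer = io.StringIO()  # swap in open('history.csv', 'w', newline='') to write a real file
writer = csv.DictWriter(buffer, fieldnames=['epoch'] + sorted(history_dict))
writer.writeheader()
for i in range(len(history_dict['loss'])):
    # One row per epoch, 1-indexed to match Keras's console output
    writer.writerow({'epoch': i + 1, **{k: v[i] for k, v in history_dict.items()}})

print(buffer.getvalue())
```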