Is there a way to get the value of the number of epochs (nb_epochs) at each iteration?

Asked: 2017-09-21 09:38:50

Tags: neural-network keras

I want to design a new regularizer, and I need the value of the current epoch during training.

For example, I want to get the values of 7 and 8 during training.

How can this be done in Keras?

2 answers:

Answer 0 (score: 3):

The regularizers are called only once, during model construction, so this is tricky. In Layer.add_weight():

    if regularizer is not None:
        self.add_loss(regularizer(weight))

Once the extra regularization loss tensor has been obtained via regularizer(weight) and added to the model, the regularizer object itself is no longer useful and is discarded. As a result, recording the epoch in the regularizer object (as a plain int or float) will not work.
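
For contrast, here is a minimal sketch of that naive approach (the class name NaiveEpochRegularizer is made up for this illustration): the Python attribute is read only once, while the loss tensor is being built, so updating it later from a callback has no effect on the compiled loss.

from keras import backend as K
from keras.regularizers import Regularizer

class NaiveEpochRegularizer(Regularizer):
    """Stores the epoch as a plain Python number -- this does NOT work."""
    def __init__(self):
        self.current_epoch = 0  # plain int, baked into the graph below

    def __call__(self, x):
        # current_epoch is read exactly once, when the loss tensor is built
        # during model construction; the resulting graph contains the
        # constant 0, not a live reference to this attribute.
        return self.current_epoch * K.sum(K.square(x))

# Changing self.current_epoch later (e.g. from a callback) updates the Python
# object, but the already-built loss tensor never sees the new value.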

If you want a value that can be manipulated during training, you have to make the epoch a Variable and include it in the computation of the regularization loss tensor. For example:

from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import Regularizer

# shared variable that the callback below will update at the start of each epoch
epoch_variable = K.variable(0.)

class MyRegularizer(Regularizer):
    def __init__(self, epoch_variable):
        self.epoch_variable = epoch_variable

    def __call__(self, x):
        # just to show that the epoch is updated and used in the loss computation
        return self.epoch_variable ** 2

model = Sequential()
model.add(Dense(100, input_shape=(10,), kernel_regularizer=MyRegularizer(epoch_variable)))
model.add(Dense(1))
model.compile(loss='binary_crossentropy', optimizer='adam')

To update the value of epoch_variable, use a custom callback:

from keras.callbacks import Callback

class MyCallback(Callback):
    def __init__(self, epoch_variable):
        self.epoch_variable = epoch_variable

    def on_epoch_begin(self, epoch, logs=None):
        # epoch is 0-based, so store epoch + 1 to match the printed "Epoch n/10"
        K.set_value(self.epoch_variable, epoch + 1)

model.fit(X, Y, callbacks=[MyCallback(epoch_variable)])
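
In the call above, X and Y stand for your training data. A self-contained way to try it out (the random data and the epochs=10 setting are assumptions chosen here just to reproduce a run like the log below) might be:

import numpy as np

# dummy data invented purely so the example runs end to end:
# 100 samples with 10 features each, and binary targets
X = np.random.rand(100, 10)
Y = np.random.randint(2, size=(100, 1))

model.fit(X, Y, epochs=10, callbacks=[MyCallback(epoch_variable)])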

You should see something like the following (the loss grows roughly as the square of the epoch number, on top of the cross-entropy term, which shows that the variable really is being updated):

Epoch 1/10
100/100 [==============================] - 0s - loss: 3.0042
Epoch 2/10
100/100 [==============================] - 0s - loss: 4.9652
Epoch 3/10
100/100 [==============================] - 0s - loss: 9.9544
Epoch 4/10
100/100 [==============================] - 0s - loss: 16.7814
Epoch 5/10
100/100 [==============================] - 0s - loss: 25.7923
Epoch 6/10
100/100 [==============================] - 0s - loss: 36.7659
Epoch 7/10
100/100 [==============================] - 0s - loss: 49.7384
Epoch 8/10
100/100 [==============================] - 0s - loss: 64.7239
Epoch 9/10
100/100 [==============================] - 0s - loss: 81.7514
Epoch 10/10
100/100 [==============================] - 0s - loss: 100.7349

Answer 1 (score: 0):

If you work with callbacks, you have access to the epoch, the batch, and the logs in each case.

A LambdaCallback is a nice option:

from keras.callbacks import LambdaCallback

def epochStart(epoch, logs):
    # do stuff when an epoch starts

    # do stuff with the number of the 'epoch'
    # (starting from 0, different from the written outputs in your question)

    # do stuff with the logs, which is a dictionary with the 'loss', 'val_loss'
    # and other metrics you may have used in compile,
    # such as 'acc', 'val_acc'
    # you may print(logs) to see everything
    pass

def epochEnd(epoch, logs):
    # do stuff when an epoch ends
    # same idea as above
    pass


myCallback = LambdaCallback(on_epoch_begin=epochStart, on_epoch_end=epochEnd)

When training, pass a list of callbacks:

model.fit(X,Y,....., callbacks=[myCallback])
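
As a purely illustrative, concrete version of the skeleton above, a LambdaCallback that just prints the 1-based epoch number and the end-of-epoch loss could look like this (X and Y again stand for your training data):

from keras.callbacks import LambdaCallback

# hypothetical concrete callback: print the 1-based epoch number at the start
# of each epoch, and the loss from the logs dictionary at the end
print_epoch = LambdaCallback(
    on_epoch_begin=lambda epoch, logs: print('starting epoch', epoch + 1),
    on_epoch_end=lambda epoch, logs: print('epoch', epoch + 1, 'loss:', logs.get('loss'))
)

model.fit(X, Y, callbacks=[print_epoch])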