Custom loss function in Keras / R

Date: 2020-05-15 01:48:49

Tags: r keras loss-function

Hi, I have written two custom loss functions in Keras / R. The goal is to minimize column-related phase drift as much as possible.

First

K <- backend()

# maximum absolute element-wise error per sample
metric_max <- function(y_true, y_pred) {
  max_pred <- k_max(k_abs(y_true - y_pred), axis = -1)
  max_pred <- k_cast(max_pred, 'float32')
  return(max_pred)
}

Second

metric_loss <- function(y_true, y_pred) {
  # round predictions to 0/1, then count the elements that differ from y_true
  y_pred <- k_round(y_pred)
  loss <- k_not_equal(y_true, y_pred)
  loss <- k_cast(loss, 'float32')
  loss <- k_sum(loss, axis = -1)
  # worst case across the batch
  loss <- k_max(loss)
  return(loss)
}

The first loss function works fine, but when I run the second one I get a gradient error:

Error in py_call_impl(callable, dots$args, dots$keywords) : 
  ValueError: No gradients provided for any variable: ['dense_330/kernel:0', 'dense_330/bias:0', 'dense_331/kernel:0', 'dense_331/bias:0', 'dense_332/kernel:0', 'dense_332/bias:0', 'dense_333/kernel:0', 'dense_333/bias:0', 'dense_334/kernel:0', 'dense_334/bias:0']. 

Can someone suggest a solution?
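For reference, k_round() and k_not_equal() are piecewise-constant operations with no usable gradient, which is presumably why the optimizer reports that no gradients were provided. A minimal sketch of a differentiable surrogate (hypothetical, assuming the intent is to penalize the worst-case sample of the batch) could look like this:

# hypothetical differentiable surrogate for metric_loss (a sketch, not the
# original code): smooth per-element error instead of round / not-equal
metric_loss_soft <- function(y_true, y_pred) {
  err <- k_square(y_true - y_pred)
  # per-sample error, then the worst sample in the batch,
  # mirroring the sum + max structure of metric_loss
  err <- k_sum(err, axis = -1)
  k_max(err)
}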

Source code

library(keras)

# generate data: 20 input features, 10 binary targets
x_train <- matrix(runif(5e3 * 20), nrow = 5e3, ncol = 20)
y_train <- matrix(round(runif(5e3 * 10, min = 0, max = 1)), nrow = 5e3, ncol = 10)
x_test <- matrix(runif(100 * 20), nrow = 100, ncol = 20)
y_test <- matrix(round(runif(100 * 10, min = 0, max = 1)), nrow = 100, ncol = 10)

# create model
model <- keras_model_sequential()

# define and compile the model
model %>%
  layer_dense(units = 50, activation = 'relu', input_shape = c(20)) %>%
  layer_dense(units = 30, activation = 'relu') %>%
  layer_dense(units = 10, activation = 'linear') %>%
  compile(
    loss = metric_loss,
    optimizer = optimizer_adam(),
    metrics = c('accuracy')
  )

# train
model %>% fit(x_train, y_train, epochs = 50, batch_size = 128, validation_split = .2)
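If the goal is only to monitor the worst-case error while training on a standard loss, one option (a sketch, assuming a keras R package version where custom_metric() is available) is to pass the custom function as a metric rather than as the loss:

# sketch: train on a differentiable loss and monitor metric_max separately;
# a metric is only reported, so it does not need a gradient
model %>% compile(
  loss = 'mse',
  optimizer = optimizer_adam(),
  metrics = list(custom_metric('max_abs_error', metric_max))
)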

0 Answers:

No answers yet.