R Keras: multi-task CNN where the sum of two outputs must be 100

Date: 2018-09-27 01:36:10

Tags: r tensorflow keras deep-learning

I have created a model in R using Keras. I want to perform multi-task regression: some shared layers, followed by a separate stack of fully connected layers per task, each ending in a final fully connected layer of size 1 that corresponds to that task's prediction.

Now suppose I have three outputs Y1, Y2, Y3. I want outputs Y1 and Y2 to sum to 100 (equivalently, Y1/100 + Y2/100 = 1), while each output must have its own loss function (I want to apply weights to the observations).

I have built my model, and it works well when I do not add the constraint sum(Y1 + Y2) = 100, but I cannot make it work with the constraint. I tried using a softmax layer, but each output returns 1.
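
As far as I can tell, this is because a softmax over a single unit is always 1: with only one logit x it computes exp(x) / sum(exp(x)) = exp(x) / exp(x). A quick check in plain R (a minimal sketch, not Keras code):

# softmax of a single value is identically 1
softmax <- function(x) exp(x) / sum(exp(x))
softmax(5)          # 1
softmax(c(5, -2))   # ~0.9991 and ~0.0009, which sum to 1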

I provide the graph and some sample code below. This is really an implementation question, since I believe it is possible (it can probably be done easily with a softmax).

library(keras)

# shared convolutional base (functional API); channels-first input of shape 3 x 6 x 6
input <- layer_input(shape = c(3, 6, 6))

base.model <- input %>%
   layer_conv_2d(filter = 64, kernel_size = c(3,3), padding = 'same', data_format = 'channels_first') %>%
   layer_activation("relu") %>%
   layer_max_pooling_2d(pool_size = c(2,2)) %>%
   layer_conv_2d(filter = 20, kernel_size = c(2,2), padding = "same", activation = "relu") %>%
   layer_dropout(0.4) %>%
   layer_flatten()

# add outputs
Y1 <- base.model %>% 
   layer_dense(units = 40) %>%
   layer_dropout(rate = 0.3) %>% 
   layer_dense(units = 50) %>%
   layer_dropout(rate = 0.3) %>% 
   layer_dense(units = 1, name="Y1")

# add outputs
Y2 <- base.model %>% 
   layer_dense(units = 40) %>%
   layer_dropout(rate = 0.3) %>% 
   layer_dense(units = 50) %>%
   layer_dropout(rate = 0.3) %>% 
   layer_dense(units = 1, name="Y2")

# add outputs
Y3 <- base.model %>% 
   layer_dense(units = 40) %>%
   layer_dropout(rate = 0.3) %>% 
   layer_dense(units = 50) %>%
   layer_dropout(rate = 0.3) %>% 
   layer_dense(units = 1, name="Y3")

base.model <- keras_model(input,list(Y1,Y2,Y3)) %>%
compile(
  loss = "mean_squared_error",
  optimizer = 'adam',
  loss_weights=list(Y1=1.0, Y2=1.0, Y3=1.0)
)

history <- base.model %>% fit(
   x = Xtrain,
   y = list(Y1 = Ytrain.y1, Y2 = Ytrain.y2, Y3 = Ytrain.y3),
   epochs = 500, batch_size = 500,
   sample_weight = list(Y1 = data$weigth.y1[sp_new], Y2 = data$weigth.y2[sp_new], Y3 = data$weigth.y3[sp_new]),
   validation_split = 0.2)

The overall idea is summarized by this graph: https://www.dropbox.com/s/ueclq42of46ifig/graph%20CNN.JPG?dl=0

Now, when I try to use a softmax layer, I do the following:

# one shared 1-unit softmax layer, applied to each output separately
soft.l <- layer_dense(units = 1, activation = 'softmax')

Y11 <- Y1 %>% soft.l %>% layer_dense(units = 1, name = "Y11", trainable = T)
Y22 <- Y2 %>% soft.l %>% layer_dense(units = 1, name = "Y22", trainable = T)

Then the rest becomes:

base.model <- keras_model(input,list(Y11,Y22,Y3)) %>%
compile(
  loss = "mean_squared_error",
  optimizer = 'adam',
  loss_weights=list(Y11=1.0, Y22=1.0, Y3=1.0)
)

history <- base.model %>% fit(
   x = Xtrain,
   y = list(Y11 = Ytrain.y1, Y22 = Ytrain.y2, Y3 = Ytrain.y3),
   epochs = 500, batch_size = 500,
   sample_weight = list(Y11 = data$weigth.y1[sp_new], Y22 = data$weigth.y2[sp_new], Y3 = data$weigth.y3[sp_new]),
   validation_split = 0.2)

preds <- base.model %>% predict(Xtest)
preds[[1]] + preds[[2]]

The problem is that the sum of the predictions (Y11 + Y22) does not equal 1. Am I doing something wrong?

1 Answer:

Answer 0 (score: 0):

I am sharing my answer as it may help others. The solution is easy to implement with a concatenate layer followed by a softmax activation, which makes the concatenated outputs sum to 1. The key point is that the softmax must be applied across both outputs at once: applying it to each 1-unit output separately (as in the question) normalizes each output to 1 on its own, whereas concatenating Y1 and Y2 first lets the softmax normalize over the pair:

# same first part as before
input <- layer_input(shape = c(3, 6, 6))

base.model <- input %>%
    layer_conv_2d(filter = 64, kernel_size = c(3,3), padding = 'same', data_format = 'channels_first') %>%
    layer_activation("relu") %>%
    layer_max_pooling_2d(pool_size = c(2,2)) %>%
    layer_conv_2d(filter = 20, kernel_size = c(2,2), padding = "same", activation = "relu") %>%
    layer_dropout(0.4) %>%
    layer_flatten()

# add outputs
Y1 <- base.model %>% 
    layer_dense(units = 40) %>%
    layer_dropout(rate = 0.3) %>% 
    layer_dense(units = 50) %>%
    layer_dropout(rate = 0.3) %>% 
    layer_dense(units = 1, name="Y1")

# add outputs
Y2 <- base.model %>% 
    layer_dense(units = 40) %>%
    layer_dropout(rate = 0.3) %>% 
    layer_dense(units = 50) %>%
    layer_dropout(rate = 0.3) %>% 
    layer_dense(units = 1, name="Y2")

# add outputs
Y3 <- base.model %>% 
    layer_dense(units = 40) %>%
    layer_dropout(rate = 0.3) %>% 
    layer_dense(units = 50) %>%
    layer_dropout(rate = 0.3) %>% 
    layer_dense(units = 1, name="Y3")

## NEW
# concatenate Y1 and Y2 into a single 2-unit output, then apply the softmax
# across both units so that the two predictions always sum to 1
combined <- layer_concatenate(list(Y1, Y2)) %>% layer_activation_softmax(name = 'combined')


base.model <- keras_model(input, list(combined, Y3)) %>% compile(
    loss = "mean_squared_error",
    optimizer = 'adam',
    loss_weights = list(combined = 1.0, Y3 = 1.0)  # one scalar weight per output
)

history <- base.model %>% fit(
    x = Xtrain,
    y = list(combined = cbind(Ytrain.y1, Ytrain.y2), Y3 = Ytrain.y3),
    epochs = 500, batch_size = 500,
    # note: by default Keras expects one weight per sample (a 1D vector) for
    # each output, so a two-column weight matrix may need to be reduced to a
    # single per-sample vector
    sample_weight = list(combined = cbind(data$weigth.y1[sp_new], data$weigth.y2[sp_new]),
                         Y3 = data$weigth.y3[sp_new]),
    validation_split = 0.2)
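
After fitting, the first output of the model is a two-column matrix whose rows sum to 1 by construction. Since the targets here should sum to 100 rather than 1, one option (my own suggestion, beyond the code above) is to divide the Y1/Y2 targets by 100 before fitting and rescale the predictions afterwards. A sketch, assuming Xtest has the same shape as Xtrain:

preds <- base.model %>% predict(Xtest)

# rows of the combined output sum to 1 thanks to the softmax
rowSums(preds[[1]])   # approximately 1 for every test sample

# if Y1 and Y2 live on a 0-100 scale, fit on Ytrain.y1/100 and Ytrain.y2/100,
# then rescale the predictions back:
pred.y1 <- preds[[1]][, 1] * 100
pred.y2 <- preds[[1]][, 2] * 100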