How to structure 2D convolutional layers to detect patterns in "QR code"-like images for deep-learning classification?

Date: 2019-07-10 11:07:54

Tags: r tensorflow keras

I want to classify images (or, more precisely, heatmaps) using Tensorflow / keras in R.

I started by working through the tutorials, but in my case I don't want to classify "large" patterns such as cats, houses, or the digit 9, but rather the varying fine-grained patterns within a heatmap.

Example:

[example heatmap image]

My current layer code is:

    model <- keras_model_sequential()
    model %>%
      # Convolutional block 1: two 3x3 conv layers, then max pooling and dropout.
      # input_shape is only needed on the first layer (Keras ignores it on later
      # layers); as written it assumes square, single-channel heatmaps.
      layer_conv_2d(filters = FLAGS$convol_filters1, kernel_size = c(3, 3),
                    input_shape = c(dim(data.training)[2], dim(data.training)[2], 1),
                    data_format = "channels_last",
                    activation = "relu", padding = "same") %>%
      layer_conv_2d(filters = FLAGS$convol_filters1, kernel_size = c(3, 3),
                    activation = "relu") %>%
      layer_max_pooling_2d(pool_size = c(2, 2)) %>%
      layer_dropout(rate = FLAGS$dropout1) %>%
      # Convolutional block 2
      layer_conv_2d(filters = FLAGS$convol_filters2, kernel_size = c(3, 3),
                    activation = "relu", padding = "same") %>%
      layer_conv_2d(filters = FLAGS$convol_filters2, kernel_size = c(3, 3),
                    activation = "relu") %>%
      layer_max_pooling_2d(pool_size = c(2, 2)) %>%
      layer_dropout(rate = FLAGS$dropout1) %>%
      # Convolutional block 3
      layer_conv_2d(filters = FLAGS$convol_filters3, kernel_size = c(3, 3),
                    activation = "relu", padding = "same") %>%
      layer_conv_2d(filters = FLAGS$convol_filters3, kernel_size = c(3, 3),
                    activation = "relu") %>%
      layer_max_pooling_2d(pool_size = c(2, 2)) %>%
      layer_dropout(rate = FLAGS$dropout1) %>%
      # Classifier head: three dense layers with ReLU and dropout, then a 2-class softmax.
      layer_flatten() %>%
      layer_dense(units = FLAGS$dense_units1) %>%
      layer_activation_relu() %>%
      layer_dropout(rate = FLAGS$dropout1) %>%
      layer_dense(units = FLAGS$dense_units1) %>%
      layer_activation_relu() %>%
      layer_dropout(rate = FLAGS$dropout1) %>%
      layer_dense(units = FLAGS$dense_units1) %>%
      layer_activation_relu() %>%
      layer_dropout(rate = FLAGS$dropout1) %>%
      layer_dense(units = 2, activation = "softmax")
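For reference, the input_shape of the first conv layer means fit() expects a 4-D array of shape (samples, height, width, 1). The question does not show the data preparation, so here is only a minimal sketch of what it might look like, assuming data.training is a 3-D array (samples x rows x columns) and labels.training holds 0/1 class labels; both names are illustrative:

    library(keras)

    # Add a trailing channel dimension so each heatmap matches the
    # (height, width, 1) input_shape of the first convolutional layer.
    # Assumes data.training is a 3-D array: samples x rows x columns.
    x.train <- array_reshape(
      data.training,
      c(dim(data.training)[1], dim(data.training)[2], dim(data.training)[3], 1)
    )

    # One-hot encode the labels to match the 2-unit softmax output layer.
    y.train <- to_categorical(labels.training, num_classes = 2)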

This does not lead to much training success: after a few (a few dozen) epochs the training accuracy is essentially 1, but val_loss skyrockets and val_accuracy drops.

I think a model like this should be able to do what I want, but I am probably going about it the wrong way. Any pointers on what I could change fundamentally before I move on to hyperparameter tuning?
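One hedged pointer along those lines: the symptom described above (training accuracy near 1 while val_loss climbs) is classic overfitting, and a common first step is to stop training once the validation loss stops improving. The sketch below assumes the usual compile/fit calls and the hypothetical x.train / y.train arrays from the data-preparation sketch; the batch size, epoch count, and patience are placeholder values, not values from the question.

    model %>% compile(
      loss = "categorical_crossentropy",   # matches the 2-unit softmax output
      optimizer = "adam",
      metrics = c("accuracy")
    )

    history <- model %>% fit(
      x.train, y.train,
      epochs = 100,
      batch_size = 32,
      validation_split = 0.2,   # hold out 20% of the data to monitor val_loss / val_accuracy
      callbacks = list(
        # Stop once val_loss has not improved for 10 epochs and keep the best weights
        # (restore_best_weights requires a reasonably recent Keras version).
        callback_early_stopping(monitor = "val_loss", patience = 10,
                                restore_best_weights = TRUE)
      )
    )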

0 Answers:

There are no answers yet.