Multi-input model with a generator function - Keras R

Time: 2019-04-19 12:03:29

Tags: r image-processing keras regression

I have been trying to build a multi-input model in Keras. One input branch takes the images, and the second takes the metadata belonging to each image.

For the images I need a generator function that feeds them to the model in batches; the metadata comes in tabular form.

Now I am wondering how to pass the data to the model so that each image is processed together with its corresponding metadata. For reference, this is a regression task.

The input data I have:

  • the images in dir1/
  • a data frame with the image paths, the features, and the target (a sketch of how it is read in follows right after the table):

    path       feature1 feature2 target
    image1.jpg 23.5     100      16
    image2.jpg 25.0     88       33
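
For reference, this is roughly how that data frame is put together (just a sketch; the file name metadata.csv is a placeholder for wherever the table actually lives):

    # one row per image: relative image path, numeric features, regression target
    joined_path_with_metadata <- read.csv(
      "metadata.csv",             # placeholder file name
      stringsAsFactors = FALSE
    )
    # expected columns: path, feature1, feature2, target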
    

The code I have so far:

  • the generator function for the images:

    train_datagen <- image_data_generator(rescale = 1/255)
    
    train_generator <- flow_images_from_dataframe(
      dataframe = joined_path_with_metadata,
      directory = 'data_dir',
      x_col = "path",
      y_col = "train",
      generator = train_datagen,
      target_size = c(150, 150),
      batch_size = 20,
      color_mode = 'rgb',
      class_mode = "sparse"
    )
    
  • the model definition:

    vision_model <- keras_model_sequential() 
    
    vision_model %>% 
      layer_conv_2d(filters = 64, 
                    kernel_size = c(3, 3), 
                    activation = 'relu', 
                    padding = 'same',
                    input_shape = c(150, 150, 3)) %>% 
      layer_max_pooling_2d(pool_size = c(2, 2)) %>% 
      layer_flatten()
    
    # Now let's get a tensor with the output of our vision model:
    image_input <- layer_input(shape = c(150, 150, 3))
    encoded_image <- image_input %>% vision_model
    
    # ANN for tabular data
    
    # one input unit per metadata feature (here: feature1 and feature2)
    tabular_input <- layer_input(shape = 2, dtype = 'float32')
    
    mlp_model <- tabular_input %>% 
      layer_dense(
        units              = 16, 
        kernel_initializer = "uniform", 
        activation         = "relu") %>% 
      # dropout to prevent overfitting
      layer_dropout(rate = 0.1) %>%
      layer_dense(
        units              = 32, 
        kernel_initializer = "uniform", 
        activation         = "relu")
    
    # concatenate the metadata vector and the image vector, then
    # train a linear regression on top of it
    output <- layer_concatenate(list(mlp_model, encoded_image)) %>% 
      layer_dense(units = 1, activation = 'linear')
    
    
    # This is the final model:
    vqa_model <- keras_model(inputs = c(image_input, tabular_input), outputs = output)
    
  • compilation:

    vqa_model %>% compile(
      optimizer = 'adam',
      loss      = 'mean_squared_error',
      metrics   = c('mean_squared_error')
    )
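
Before fitting, I can at least verify that the two branches are wired up and that the image generator yields what I expect (generator_next() just pulls a single batch so I can inspect it):

    # print the architecture to confirm the model has two inputs
    summary(vqa_model)
    
    # pull one batch from the image generator:
    # a list of (images, labels), with images of shape (20, 150, 150, 3)
    batch <- generator_next(train_generator)
    str(batch, max.level = 1)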
    

The last step would then be to fit the model. I am not sure how to do this in a way that guarantees that the right rows of features are used as the metadata for the images that are read in as a batch.
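
What I have in mind is a small wrapper generator that pulls a batch of images and pairs it with the matching rows of the data frame, roughly like the sketch below (combined_generator and meta_cols are just names I made up). It assumes the image generator is built with shuffle = FALSE, so that batch i of images corresponds to rows i * batch_size + 1, ... of joined_path_with_metadata, and it takes the features and targets straight from the data frame, ignoring the y values the image generator produces. I am not sure whether this is correct or whether there is a cleaner way:

    n          <- nrow(joined_path_with_metadata)
    batch_size <- 20                                 # must match the image generator
    meta_cols  <- c("feature1", "feature2")
    
    combined_generator <- function(image_gen, df) {
      i <- 0
      function() {
        # rows of the data frame that belong to the current batch
        rows <- (i * batch_size + 1):min((i + 1) * batch_size, n)
        i <<- (i + 1) %% ceiling(n / batch_size)
        
        image_batch   <- generator_next(image_gen)[[1]]      # images only
        tabular_batch <- as.matrix(df[rows, meta_cols])
        targets       <- df$target[rows]
        
        # inputs in the same order as in keras_model(): image, then tabular
        list(list(image_batch, tabular_batch), targets)
      }
    }
    
    vqa_model %>% fit_generator(
      generator       = combined_generator(train_generator, joined_path_with_metadata),
      steps_per_epoch = ceiling(n / batch_size),
      epochs          = 10
    )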

0 answers:

No answers yet.