Does Tensor Shape matter in a Keras model's input shape? (R programming)

Time: 2018-05-20 19:45:10

Tags: r tensorflow neural-network keras

library(png)
library(colorspace)

# Build the full path to every PNG in the four-shapes dataset
files <- list.files(pattern = "\\.png$", recursive = TRUE)
filePathFunction <- function(x){
  a <- character(length(x))
  for(i in seq_along(x)){
    a[i] <- paste("~/E-books/Coding/Machine Learning with R/Neural Networks/four-shapes",
                  x[i], sep = "/")
  }
  return(a)
}


filePaths <- filePathFunction(files)
# Sanity check: display the first image
image(readPNG(filePaths[1]), useRaster = TRUE, axes = FALSE)

# 75/25 train/test split of the file paths
set.seed(101)
ind <- sample(length(filePaths), round(0.75 * length(filePaths)), replace = FALSE)
train <- filePaths[ind]
test <- filePaths[-ind]
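Before these paths can be fed to a dense network, each PNG has to be read and flattened into one row of a samples × pixels matrix. A minimal base-R sketch of that flattening step, using simulated 200 × 200 grayscale matrices in place of `readPNG()` output (the four-shapes images are commonly 200 × 200, but check your copy of the dataset):

```r
# Simulate 4 grayscale images as 200x200 matrices standing in for readPNG() output
n_images <- 4
img_size <- 200
imgs <- lapply(seq_len(n_images),
               function(i) matrix(runif(img_size^2), img_size, img_size))

# Flatten each image into one row: the result is n_images x (200 * 200)
flat <- t(vapply(imgs, as.vector, numeric(img_size^2)))
dim(flat)
```

The resulting matrix plays the same role as the reshaped MNIST matrix in the book's example: one row per sample, one column per pixel.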

This is my preprocessing code for the dataset at https://www.kaggle.com/smeschke/four-shapes/discussion/43879. I am learning deep learning in R, and in the book's example they reshape the MNIST data from a 60000 × 28 × 28 array (60,000 images of 28 × 28 pixels) into a 60000 × 784 matrix, like this:

library(keras)
mnist <- dataset_mnist()
train_images <- mnist$train$x
train_images <- array_reshape(train_images, c(60000, 28 * 28))
train_images <- train_images / 255
test_images <- mnist$test$x
test_images <- array_reshape(test_images, c(10000, 28 * 28))
test_images <- test_images / 255

network <- keras_model_sequential() %>%
  layer_dense(units = 512, activation = "relu", input_shape = c(28 * 28)) %>%
  layer_dense(units = 10, activation = "softmax")
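One detail worth noting about the reshape above: `array_reshape()` fills in row-major (C) order, matching what Keras/NumPy expect, whereas R's native `dim<-` and `matrix()` fill column-major, so the two can walk the pixels in a different order. A base-R sketch of the difference on a toy 2 × 2 × 2 array (`row_major_reshape` is a hypothetical helper written here for illustration, not part of keras):

```r
# Toy "dataset": 2 samples, each a 2x2 block of "pixels"
a <- array(1:8, dim = c(2, 2, 2))

# Row-major (C-order) reshape to 2 x 4, mimicking keras::array_reshape():
# reverse the dims, refill column-major, then transpose back.
row_major_reshape <- function(x, dims) {
  aperm(array(aperm(x, rev(seq_along(dim(x)))), rev(dims)),
        rev(seq_along(dims)))
}
rm_flat <- row_major_reshape(a, c(2, 4))  # each row reads one sample row-wise

# Native column-major reshape visits the elements in a different order:
cm_flat <- matrix(a, nrow = 2)
```

For a one-image-per-row layout like the MNIST code, the row-major order is the one that keeps each sample's pixels grouped the way Keras expects.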

Do my input shape and the number of hidden units matter for the neural network?

1 answer:

Answer 0 (score: 0)

For convolutional networks it matters: the input shape must be consistent with the number and dimensions of the filter and pooling layers (max/avg), since each of those layers changes the spatial dimensions. For dense networks it does not matter in theory: the image is flattened into a vector, so only the total number of input features has to match the `input_shape`.
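To make the "does not matter in theory" point concrete: a dense layer is just a matrix product plus a bias followed by the activation, so it only checks that the flattened length matches. A minimal base-R sketch of one dense forward pass (the weights here are random placeholders, not a trained model):

```r
# A dense layer sees a flat vector of length 784 and is indifferent to
# the original 28x28 spatial arrangement of those values.
set.seed(1)
n_in <- 28 * 28
n_units <- 512
W <- matrix(rnorm(n_in * n_units, sd = 0.01), n_in, n_units)
b <- rep(0, n_units)
relu <- function(z) pmax(z, 0)

x <- runif(n_in)                 # one flattened image
h <- relu(drop(x %*% W) + b)     # dense forward pass: activation(xW + b)
length(h)
```

Nothing in this computation uses the fact that `x` was once a 28 × 28 grid, which is why only the flattened length matters for a dense input layer; a convolutional layer, by contrast, slides its filters over the spatial grid, so height and width must be supplied.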