I'm a beginner with deep learning and keras/tensorflow. I followed the first tutorial on tensorflow.org: basic classification of Fashion MNIST.
In that case the input data is 60000 images of 28x28 pixels, and the model looks like this:
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])
Compiled with:
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
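The fit/evaluate calls are not shown above; they are roughly as in the tutorial (a minimal sketch; the epoch count of 5 is my assumption):
model.fit(train_images, train_labels, epochs=5)                    # train on the 60000 Fashion MNIST images
test_loss, test_acc = model.evaluate(test_images, test_labels)     # evaluate on the 10000 test images
print('Test accuracy:', test_acc)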
At the end of training, the model reaches the following accuracy:
10000/10000 [==============================] - 0s 21us/step
Test accuracy: 0.8769
That's fine. Now I'm trying to replicate the model with another set of data. The new input is a dataset downloaded from kaggle.
The dataset contains images of dogs and cats of different sizes, so I wrote a simple script that loads each image, resizes it to 28x28 pixels and converts it to a numpy array.
Here is the code that does this:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
from tensorflow.keras.models import load_model
from PIL import Image
import os
# Helper libraries
import numpy as np
# base path dataset
base_path = './dataset/'
training_path = base_path + "training_set/"
test_path = base_path + "test_set/"
# target size of the images
size = 28, 28
#
train_images = []
train_labels = []
test_images = []
test_labels = []
classes = ['dogs', 'cats']
# Iterate over the folders in the path and convert the images to numpy arrays
def from_files_to_nparray(path):
    images = []
    labels = []
    for subfolder in os.listdir(path):
        if subfolder == '.DS_Store':
            continue
        for image_name in os.listdir(path + subfolder):
            if not image_name.endswith('.jpg'):
                continue
            img = Image.open(path + subfolder + "/" + image_name).convert("L").resize(size)  # convert to grayscale and resize
            npimage = np.asarray(img)
            images.append(npimage)
            labels.append(classes.index(subfolder))
            img.close()
    # convert to np arrays
    images = np.asarray(images)
    labels = np.asarray(labels)
    # Normalize to [0, 1]
    images = images / 255.0
    return (images, labels)
(train_images, train_labels) = from_files_to_nparray(training_path)
(test_images, test_labels) = from_files_to_nparray(test_path)
In the end, I have the following shapes:
Train images shape : (8000, 28, 28)
Labels images shape : (8000,)
Test images shape : (2000, 28, 28)
Test labels shape : (2000,)
After training the same model, but with the last Dense layer changed to 2 neurons (sketched just below), I got this result, which seems acceptable:
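For clarity, the two-class model is the tutorial model with only the output layer changed; a sketch of what I mean (not copied verbatim from my script):
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(2, activation=tf.nn.softmax)   # 2 classes: dogs, cats
])
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5)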
Train images shape : (8000, 28, 28)
Labels images shape : (8000,)
Test images shape : (2000, 28, 28)
Test labels shape : (2000,)
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) (None, 784) 0
_________________________________________________________________
dense (Dense) (None, 128) 100480
_________________________________________________________________
dense_1 (Dense) (None, 2) 258
=================================================================
Total params: 100,738
Trainable params: 100,738
Non-trainable params: 0
_________________________________________________________________
None
Epoch 1/5
2018-07-27 15:25:51.283117: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
8000/8000 [==============================] - 1s 66us/step - loss: 0.6924 - acc: 0.5466
Epoch 2/5
8000/8000 [==============================] - 0s 39us/step - loss: 0.6679 - acc: 0.5822
Epoch 3/5
8000/8000 [==============================] - 0s 41us/step - loss: 0.6593 - acc: 0.6048
Epoch 4/5
8000/8000 [==============================] - 0s 39us/step - loss: 0.6545 - acc: 0.6134
Epoch 5/5
8000/8000 [==============================] - 0s 39us/step - loss: 0.6559 - acc: 0.6039
2000/2000 [==============================] - 0s 33us/step
Test accuracy: 0.592
Now, the problem is that if I try to change the input size from 28x28 to, for example, 128x128 (see the sketch of the change right below), the result is this:
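The only changes for the 128x128 run are, roughly, the resize target in the preprocessing script and the input_shape of the Flatten layer (a sketch of my edits, nothing else changed):
size = 128, 128   # in the preprocessing script instead of 28, 28

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(128, 128)),   # flattens to 128*128 = 16384 inputs
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(2, activation=tf.nn.softmax)
])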
Train images shape : (8000, 128, 128)
Labels images shape : (8000,)
Test images shape : (2000, 128, 128)
Test labels shape : (2000,)
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) (None, 16384) 0
_________________________________________________________________
dense (Dense) (None, 128) 2097280
_________________________________________________________________
dense_1 (Dense) (None, 2) 258
=================================================================
Total params: 2,097,538
Trainable params: 2,097,538
Non-trainable params: 0
_________________________________________________________________
None
Epoch 1/5
2018-07-27 15:27:41.966860: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
8000/8000 [==============================] - 4s 483us/step - loss: 8.0341 - acc: 0.4993
Epoch 2/5
8000/8000 [==============================] - 3s 362us/step - loss: 8.0590 - acc: 0.5000
Epoch 3/5
8000/8000 [==============================] - 3s 351us/step - loss: 8.0590 - acc: 0.5000
Epoch 4/5
8000/8000 [==============================] - 3s 342us/step - loss: 8.0590 - acc: 0.5000
Epoch 5/5
8000/8000 [==============================] - 3s 342us/step - loss: 8.0590 - acc: 0.5000
2000/2000 [==============================] - 0s 217us/step
Test accuracy: 0.5
Why? Even if I add another dense layer or increase the number of neurons, the result stays the same.
What is the relationship between the input size and the model's layers? Thanks!
Answer 0 (score: 3)
The problem is that in the second example you have far more parameters to train. In the first example you have only about 100k parameters, and you train them with 8k images.
In the second example you have about 2,000k parameters and you try to train them with the same number of images. This does not work, because there is a relationship between the number of free parameters and the number of samples. There is no exact formula for this relationship, but as a rule of thumb you should have more samples than trainable parameters.
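The parameter counts in the two summaries follow directly from the flattened input size; a quick back-of-the-envelope check (plain Python, just the arithmetic):
# first Dense layer: inputs * units + biases
params_dense_28  = 28 * 28 * 128 + 128        # 100,480
params_dense_128 = 128 * 128 * 128 + 128      # 2,097,280
# output layer: 128 * 2 + 2 = 258 in both cases
print(params_dense_28 + 258, params_dense_128 + 258)   # 100738 2097538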
You can try training for more epochs and see how it behaves, but in general a more complex model needs more data.
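For example (a minimal sketch; the epoch count here is arbitrary):
model.fit(train_images, train_labels, epochs=50)                 # more passes over the same 8000 images
test_loss, test_acc = model.evaluate(test_images, test_labels)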