I am trying to run a VGG16 model for digit recognition on the SVHN dataset
http://ufldl.stanford.edu/housenumbers/train_32x32.mat
However, the predictions are always the same. I have tried:
- scaling the input images from 0-255 to 0-1
- subtracting the mean from each image
- dividing by the std
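In essence, per image I do something like this (a minimal NumPy sketch of the idea; the actual preprocessing functions are further below):
import numpy as np

def standardize(img):
    x = img.astype(np.float32) / 255.0  # scale 0-255 to 0-1
    x = x - np.mean(x)                  # subtract the per-image mean
    return x / np.std(x)                # divide by the per-image std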
Here is how I run it:
Initializing VGG16:
vgg = tf.keras._impl.keras.applications.vgg16.VGG16(include_top=True,
                                                    weights=None,
                                                    input_tensor=None,
                                                    input_shape=(64,64,3),
                                                    pooling='max',
                                                    classes=10)
vgg.compile(loss=tf.keras.losses.categorical_crossentropy,
            optimizer=tf.keras.optimizers.SGD(lr=1e-4, momentum=0.9, nesterov=True),
            metrics=['mae', 'acc'])
#Here train_data shape is (None,64,64,3) and labels_data shape is (None,10) and are one hot like labels
vgg.fit(train_data, labels_data, epochs=5, batch_size=96)
The training data is read and preprocessed like this:
train_data = sio.loadmat('datasets/housing/train_32x32.mat')
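For reference, the raw .mat layout looks like this (a quick sanity check; the shapes follow the SVHN cropped-digits format, where labels run 1-10 and 10 stands for the digit 0):
print train_data['X'].shape  # (32, 32, 3, N): 32x32 RGB images, sample index last
print train_data['y'].shape  # (N, 1): labels in 1..10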
The two functions I use to preprocess train_data:
import numpy as np
import cv2
import tensorflow as tf
import os
import scipy
from skimage import data, io, filters
import scipy.io as sio
from utils import *
import h5py
from tensorflow.python.keras._impl.keras import backend as K
from tensorflow.python.keras._impl.keras.applications.imagenet_utils import _obtain_input_shape
from tensorflow.python.keras._impl.keras.applications.imagenet_utils import decode_predictions # pylint: disable=unused-import
from tensorflow.python.keras._impl.keras.applications.imagenet_utils import preprocess_input # pylint: disable=unused-import
from tensorflow.python.keras._impl.keras.engine.topology import get_source_inputs
from tensorflow.python.keras._impl.keras.layers import Conv2D
from tensorflow.python.keras._impl.keras.layers import Dense
from tensorflow.python.keras._impl.keras.layers import Flatten
from tensorflow.python.keras._impl.keras.layers import GlobalAveragePooling2D
from tensorflow.python.keras._impl.keras.layers import GlobalMaxPooling2D
from tensorflow.python.keras._impl.keras.layers import Input
from tensorflow.python.keras._impl.keras.layers import MaxPooling2D
from tensorflow.python.keras._impl.keras.models import Model
from tensorflow.python.keras._impl.keras.utils import layer_utils
from tensorflow.python.keras._impl.keras.utils.data_utils import get_file
from tensorflow.python.keras._impl.keras.engine import Layer
def reshape_mat_vgg(QUANTITY, matfilepath='datasets/housing/train_32x32.mat', type="non-quantized", size=(64,64)):
    data = read_mat_file(matfilepath)
    # raw SVHN layout is (32, 32, 3, N); move the sample axis to the front -> (N, 32, 32, 3)
    train = data['X'][:,:,:,0:QUANTITY]
    train = np.swapaxes(np.swapaxes(np.swapaxes(train,2,3),1,2),0,1)
    labels = data['y'][0:QUANTITY]
    labels_data = []; labels_data = np.array(labels_data)
    train_data = np.zeros((QUANTITY,size[0],size[1],3))
    print "Reorganizing Data..."
    for i in range(QUANTITY):
        image_i = np.copy(train[i,:,:,:])
        image_i = preprocess_small_vgg16(image_i, new_size=size, type=type)
        train_data[i,:,:,:] = image_i
        # one-hot encode: SVHN labels run 1..10, so shift down by 1 to get indices 0..9
        label_i = np.zeros((10)); label_i[labels[i]-1] = 1.0; label_i = label_i.reshape(1,10)
        if i == 0:
            labels_data = np.vstack(( label_i ))
        else:
            labels_data = np.vstack(( labels_data, label_i ))
        if i % 1000 == 0:
            print i*100/(QUANTITY-1),"percent done..."
    print "100 percent done..."
    return train_data, labels_data
def preprocess_small_vgg16(image, new_size=(64,64), type="non-quantized"):
    img = np.copy(image)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    # Whitening
    imgnorm = img * 255.0/np.max(img)
    # Normalisation
    gray = imgnorm
    borderless = gray[:,5:gray.shape[1]-5,:]  # centering
    final_img = cv2.resize(borderless, new_size)
    final_img = final_img/np.max(final_img)  # scaling 0-1
    stddev = np.std(final_img); mn = np.mean(final_img)
    final_img = (final_img - mn) / stddev  # standardizing
    return final_img
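Putting it together, this is roughly how I call them before the fit shown above (a sketch; the 5000 matches the sample count in the output below):
train_data, labels_data = reshape_mat_vgg(5000, matfilepath='datasets/housing/train_32x32.mat', size=(64,64))
vgg.fit(train_data, labels_data, epochs=5, batch_size=96)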
Output:
Epoch 1/10
5000/5000 [==============================] - 1346s - loss: 3.2877 - mean_absolute_error: 0.0029 - acc: 0.1850
Epoch 2/10
Running it for more epochs did not help; I tried 5 epochs.
When I check the output/predictions, it gives the same result for every input, e.g. (converted with np.argmax(pred, axis=-1)):
[3 3 3 . . . 3 3 3]
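That check is essentially (a sketch; test_data here is assumed to be preprocessed the same way as train_data):
pred = vgg.predict(test_data, batch_size=96)
print np.argmax(pred, axis=-1)  # prints the same class index for every sample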
Please point out what is wrong with my model.
Answer (score: 1)
I think most of your problem is that 5 epochs is nowhere near enough for the VGG weights to converge. In the original VGG paper, the authors state they trained for 370k iterations (74 epochs) to get their best results, so you might consider giving it considerably more time to converge.
I fed your dataset to one of the TensorFlow tutorial nets, and you can see how the total loss keeps decreasing over thousands of iterations.
Of course, you can also fine-tune the learning rate decay to avoid spending too long on plateaus, for example as sketched below.
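Something along these lines (a sketch, not tuned; the decay value is just illustrative, and in this version of Keras the SGD decay argument applies a simple per-update learning-rate decay):
# same model as before, only the optimizer and the number of epochs change:
optimizer = tf.keras.optimizers.SGD(lr=1e-4, decay=1e-6, momentum=0.9, nesterov=True)
vgg.compile(loss=tf.keras.losses.categorical_crossentropy,
            optimizer=optimizer, metrics=['mae', 'acc'])
vgg.fit(train_data, labels_data, epochs=74, batch_size=96)  # far more epochs than 5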