CNN - model predicts the same class for every input

Date: 2016-03-21 19:03:50

Tags: networking theano convolution keras

I have read some other answers to similar questions on Stack Overflow, but none of them helped in this case. I have a set of 539 RGB images, each 607 x 607 x 3, and each image belongs to one of 6 classes. I have had success with the MNIST and CIFAR10 datasets, but when I build a CNN for this dataset, the val_acc reported during training stays constant because the model predicts the same class for every test sample (which class it picks may vary between runs). Below is my code with an example CNN, along with the output on a GPU:

from __future__ import absolute_import
from __future__ import print_function
import cPickle
import gzip
import numpy as np
import theano
import theano.tensor as T
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.datasets import mnist
from keras.optimizers import SGD, RMSprop
from keras.utils import np_utils, generic_utils
from theano.tensor.nnet import conv
from theano.tensor.nnet import softmax
from theano.tensor import shared_randomstreams
from theano.tensor.signal import downsample
from theano.tensor.nnet import sigmoid
from theano.tensor import tanh
import pylab as pl
import matplotlib.cm as cm
import os, struct
from array import array as pyarray
from numpy import append, array, int8, uint8, zeros,genfromtxt, matrix
from matplotlib.pyplot import imshow
from sklearn.cross_validation import train_test_split
from random import randint
import cv2

# Setting up the Data
A = 539  # total number of images in the dataset
l = float(genfromtxt("/home/silo1/ad2512/Histo_6/L" + str(1) + ".csv",delimiter=','))
l1 = float(genfromtxt("/home/silo1/ad2512/Histo_6/L" + str(2) + ".csv",delimiter=','))
d = cv2.imread('/home/silo1/ad2512/Histo_6/SI1.jpg')
d1 = cv2.imread('/home/silo1/ad2512/Histo_6/SI2.jpg')
all_data=[d,d1]
labels=[l,l1]
for i in range(A-2):
    if((i+3)>A):
        break
    l = float(genfromtxt("/home/silo1/ad2512/Histo_6/L" + str(i+3) + ".csv",delimiter=','))
    d = cv2.imread("/home/silo1/ad2512/Histo_6/SI" + str(i+3) + ".jpg")
    all_data.append(d)
    labels.append(l)

s = np.shape(all_data)[1]
all_data = np.asarray(all_data) 
all_data = all_data.astype('float32')
all_data = all_data.reshape(A,3,s,s)
labels = np.asarray(labels)
labels = labels.astype('int')
labels = np_utils.to_categorical(labels)


# Building Model
model = Sequential()
model.add(Convolution2D(32,3,3,init='uniform',border_mode='full',input_shape=(3,s,s)))
model.add(Activation('tanh'))
model.add(Convolution2D(32, 3, 3))
model.add(Activation('tanh'))
model.add(MaxPooling2D(pool_size=(3, 2)))
model.add(Dropout(0.25))
model.add(Convolution2D(64, 3, 3, border_mode='full'))
model.add(Activation('tanh'))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('tanh'))
model.add(MaxPooling2D(pool_size=(3, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(500))
model.add(Activation('tanh'))
model.add(Dropout(0.25))
model.add(Dense(500))
model.add(Activation('tanh'))
model.add(Dropout(0.25))
model.add(Dense(6))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer="RMSprop")
model.fit(all_data[0:200], labels[0:200], batch_size=10, nb_epoch=15,verbose=1,show_accuracy=True,validation_data=(all_data[400:539], labels[400:539]))

Output for the first 9 epochs:

Epoch 1/15
200/200 [==============================] - 73s - loss: 2.6849 - acc: 0.2500 - val_loss: 1.6781 - val_acc: 0.3957
Epoch 2/15
200/200 [==============================] - 73s - loss: 2.0138 - acc: 0.1800 - val_loss: 2.1653 - val_acc: 0.2518
Epoch 3/15
200/200 [==============================] - 73s - loss: 1.8683 - acc: 0.2600 - val_loss: 1.7330 - val_acc: 0.2518
Epoch 4/15
200/200 [==============================] - 73s - loss: 1.8136 - acc: 0.2200 - val_loss: 2.1307 - val_acc: 0.1871
Epoch 5/15
200/200 [==============================] - 73s - loss: 1.7284 - acc: 0.2600 - val_loss: 1.6952 - val_acc: 0.2518
Epoch 6/15
200/200 [==============================] - 73s - loss: 1.7373 - acc: 0.2900 - val_loss: 1.6020 - val_acc: 0.2518
Epoch 7/15
200/200 [==============================] - 73s - loss: 1.6809 - acc: 0.3050 - val_loss: 1.6524 - val_acc: 0.2518
Epoch 8/15
200/200 [==============================] - 73s - loss: 1.7306 - acc: 0.3350 - val_loss: 1.7867 - val_acc: 0.1871
Epoch 9/15
200/200 [==============================] - 73s - loss: 1.7803 - acc: 0.2400 - val_loss: 1.8107 - val_acc: 0.2518

I have tried changing the number of nodes in the hidden layers, building more complex models, swapping activation functions, everything I can think of. If I run the CIFAR10 dataset through this model (changing the last layer to Dense(10) instead of Dense(6)) I get successful results, so I am not sure whether there is a problem with the data I am importing, but the np.shape of my data has exactly the same structure as the np.shape of the CIFAR10 dataset.
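For reference, this is roughly how I compared the two (a minimal sketch, assuming the keras.datasets.cifar10 loader, which is not imported in the snippet above; the printed shapes in the comments are what I would expect, not captured output):

from keras.datasets import cifar10

# Compare the layout of my data against CIFAR10: both should be
# (n_samples, channels, rows, cols) arrays with one-hot label matrices.
(X_cifar, y_cifar), _ = cifar10.load_data()
print(np.shape(all_data), all_data.dtype)   # expected: (539, 3, 607, 607) float32
print(np.shape(X_cifar), X_cifar.dtype)     # expected: (50000, 3, 32, 32)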

1 Answer:

Answer 0 (score: 0)

Your val_acc is not constant, it is jumping back and forth. Try lowering the learning rate by a factor of 10, and then by another factor of 10, until it starts to learn. You will have to create an RMSprop object and pass that in instead of the string.
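A minimal sketch of what that looks like with the Keras API used in the question (the learning rate of 1e-4 here is just an illustrative starting point, not a value prescribed by the answer):

from keras.optimizers import RMSprop

# Pass an optimizer instance so the learning rate can be set explicitly;
# RMSprop defaults to lr=0.001, so try dropping it by a factor of 10 first.
rmsprop = RMSprop(lr=0.0001)
model.compile(loss='categorical_crossentropy', optimizer=rmsprop)

If 1e-4 still does not learn, keep dividing the learning rate by 10 and retraining until the loss starts to move consistently.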