Unable to train a deep autoencoder neural network

Time: 2016-11-23 09:22:52

Tags: python deep-learning keras

I am trying to train the neural network below, but I cannot get good results. The dataset is MNIST. As you can see from the log, the loss does not get low. I have tried different optimizers (Adagrad, SGD, Adam) and different learning rates, but nothing works. Is there any way to get better results?

Training log

Epoch 75/100 60000/60000 [==============================] - 271s - loss: 1191.9388
Epoch 76/100 60000/60000 [==============================] - 232s - loss: 1191.7773
Epoch 77/100 60000/60000 [==============================] - 232s - loss: 1191.6079
Epoch 78/100 60000/60000 [==============================] - 207s - loss: 1191.4511
Epoch 79/100 60000/60000 [==============================] - 205s - loss: 1191.2935
Epoch 80/100 60000/60000 [==============================] - 223s - loss: 1191.1510
Epoch 81/100 60000/60000 [==============================] - 243s - loss: 1191.0016
Epoch 82/100 60000/60000 [==============================] - 224s - loss: 1190.8688
Epoch 83/100 60000/60000 [==============================] - 214s - loss: 1190.7299
Epoch 84/100 60000/60000 [==============================] - 283s - loss: 1190.5929
Epoch 85/100 60000/60000 [==============================] - 243s - loss: 1190.4609

Auto-encoder network structure

from keras.datasets import mnist
from keras.models import load_model, Model, Sequential
from keras.layers import Input, Dense, Activation
import matplotlib.pyplot as plt
import numpy as np
import keras
import theano
import os

(x_train, y_train), (x_test, y_test) = mnist.load_data()
nx_train=np.reshape(x_train, (x_train.shape[0],x_train.shape[1]*x_train.shape[2]))

if os.path.isfile('deep_auto_encoder_example.h5'):
    model=load_model('deep_auto_encoder_example.h5')
    inputs = model.inputs[0]
    encoded = model.layers[4].output
else:
    inputs = Input(shape=(784,))
    en_hid1 = Dense(1000, activation='linear')(inputs)
    en_hid2 = Dense(500, activation='linear')(en_hid1)
    en_hid3 = Dense(250, activation='linear')(en_hid2)
    encoded = Dense(30, activation='linear')(en_hid3)
    de_hid1 = Dense(250, activation='linear')(encoded)
    de_hid2 = Dense(500, activation='linear')(de_hid1)
    de_hid3 = Dense(1000, activation='linear')(de_hid2)
    decoded = Dense(784)(de_hid3)
    model = Model(input=inputs, output=decoded)
    sgd = keras.optimizers.Adagrad(lr=0.0001)
    model.compile(loss='mean_squared_error', optimizer=sgd)
    model.fit(nx_train, nx_train, batch_size=32, nb_epoch=100)
    model.save('deep_auto_encoder_example.h5')

import scipy.misc
from PIL import Image
para_imnum = 10
imarray=model.predict(nx_train[0:para_imnum**2])
ll = []
for i in range(para_imnum**2):
    im = np.reshape(imarray[i],(28,28))
    if i ==0:
        ll = im
    else:
        ll = np.concatenate((ll,im), axis=1)

lh = []
ims = np.reshape(np.transpose(ll),(para_imnum,28*para_imnum,28))
for i in range(para_imnum):
    if i==0:
        lh = np.transpose(ims[i])
    else:
        lh = np.concatenate((lh,np.transpose(ims[i])),axis=0) 

scipy.misc.imsave('deep_auto_encoder_predict.jpg',lh)


encoder = Model(input=inputs, output=encoded)

import tsne
cor_xy = tsne.tsne(encoder.predict(nx_train[:2000])/1000)
cor_x =[]
cor_y =[]
point_c = []
c = np.arange(10)/10.
for i in range(cor_xy.shape[0]):
    cor_x.append(cor_xy[i][0])
    cor_y.append(cor_xy[i][1])
    point_c.append(c[y_train[i]])
#print(pp)
plt.scatter(cor_x, cor_y, c=point_c)
plt.savefig('deep_tsne')

#plt.show()

1 Answer:

Answer 0 (score: 0)

I know this is late, but in case somebody comes back to it: try different non-linearities and different optimizers. You can also start with fewer layers and add more once the simpler ones have been trained. In such a setup you can additionally freeze the weights of the layers that have already been trained.
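
A minimal sketch of what this answer suggests, written against the same Keras 1.x API the question uses: replace the linear activations with relu/sigmoid, scale the pixels to [0, 1], pretrain the encoder greedily one layer at a time, then train the decoder with the pretrained encoder layers frozen before fine-tuning everything. The layer sizes, epoch counts, batch size and the choice of Adam are illustrative assumptions, not the answerer's code.

from keras.datasets import mnist
from keras.models import Model
from keras.layers import Input, Dense
import numpy as np

(x_train, _), _ = mnist.load_data()
x = np.reshape(x_train, (x_train.shape[0], 784)).astype('float32') / 255.0  # scale pixels to [0, 1]

layer_sizes = [1000, 500, 250, 30]   # encoder widths, mirroring the question; illustrative only
encoder_layers = []

# Stage 1: greedy layer-wise pretraining. Each shallow (encode, decode) pair is
# trained on the codes produced by the previously trained encoder layers.
current_input = x
for size in layer_sizes:
    dim = current_input.shape[1]
    inp = Input(shape=(dim,))
    enc = Dense(size, activation='relu')(inp)
    dec = Dense(dim, activation='sigmoid')(enc)
    pair = Model(input=inp, output=dec)
    pair.compile(loss='mean_squared_error', optimizer='adam')
    pair.fit(current_input, current_input, batch_size=128, nb_epoch=10)
    encoder_layers.append(pair.layers[1])                      # keep the trained encoder layer
    current_input = Model(input=inp, output=enc).predict(current_input)

# Stage 2: stack the pretrained encoder layers, freeze them, and train only the
# freshly initialised decoder on top of them.
inputs = Input(shape=(784,))
h = inputs
for layer in encoder_layers:
    layer.trainable = False
    h = layer(h)
for size in [250, 500, 1000, 784]:                             # mirror of the encoder
    h = Dense(size, activation='sigmoid' if size == 784 else 'relu')(h)
autoencoder = Model(input=inputs, output=h)
autoencoder.compile(loss='mean_squared_error', optimizer='adam')
autoencoder.fit(x, x, batch_size=128, nb_epoch=10)

# Stage 3: unfreeze the encoder and fine-tune the whole network end to end.
for layer in encoder_layers:
    layer.trainable = True
autoencoder.compile(loss='mean_squared_error', optimizer='adam')  # recompile so the flag change takes effect
autoencoder.fit(x, x, batch_size=128, nb_epoch=10)

With sigmoid outputs and inputs scaled to [0, 1] the reconstruction loss lands on a sensible scale, and each shallow pair only has to learn one non-linear mapping at a time, which is usually much easier to optimize than the eight-layer, purely linear network in the question.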