Training a sparse autoencoder in TensorFlow

Date: 2018-07-29 14:25:22

Tags: tensorflow nan autoencoder loss

I am trying to train a sparse autoencoder, but from the very first epoch the loss shows up as "nan". The training dataset is normalized to the range 0 to 1.

## cost function

prediction = decoder_res  # output from the decoder
actual = x

cost_MSE = tf.reduce_mean(tf.pow(actual - prediction, 2))

# weight-decay part

cost_regul = tf.reduce_sum(tf.square(W["W1"])) + tf.reduce_sum(tf.square(W["W2"]))

# sparsity cost
rho_j = tf.reduce_mean(encoder_res, axis=0)
print(rho_j.shape)

cost_sparse = tf.reduce_sum(sparse_param * tf.log(sparse_param / rho_j)
                            + (1 - sparse_param) * tf.log((1 - sparse_param) / (1 - rho_j)))

cost_fn = cost_MSE + (lamd / 2) * cost_regul + beta * cost_sparse
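A likely source of the nan is the KL-divergence sparsity term: if any unit's mean activation rho_j reaches 0 or 1 (easy when a sigmoid saturates, or with a ReLU encoder), tf.log blows up, and the gradient through 1/rho_j becomes nan. A minimal NumPy sketch of the same formula (reusing sparse_param = 0.05 from the question; the eps clipping is the suggested fix, not part of the original code) shows the failure and the remedy:

```python
import numpy as np

sparse_param = 0.05  # target mean activation, same value as in the question

def kl_sparsity(rho_j, eps=0.0):
    """KL divergence between the target sparsity sparse_param and the
    measured mean activations rho_j, optionally clipped away from 0 and 1."""
    rho = np.clip(rho_j, eps, 1.0 - eps) if eps > 0 else rho_j
    return np.sum(sparse_param * np.log(sparse_param / rho)
                  + (1 - sparse_param) * np.log((1 - sparse_param) / (1 - rho)))

rho = np.array([0.0, 0.05, 1.0])      # boundary activations break the logs
print(kl_sparsity(rho))               # -> inf: the 0 and 1 entries blow up
print(kl_sparsity(rho, eps=1e-8))     # finite once rho_j is clipped
```

In the TensorFlow graph the equivalent one-line fix would be clipping before the logs, e.g. `rho_j = tf.clip_by_value(rho_j, 1e-8, 1 - 1e-8)`.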

I have tried other optimizers, but the loss is still nan.

The network parameters are as follows:

# network parameters
sparse_param=0.05
lamd = 0.05
beta=1
num_inputs=2048
num_h1=1000

Optimization:

optim = tf.train.GradientDescentOptimizer(learning_rate=l_r)
training = optim.minimize(cost_fn)
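Another common cause of nan with plain gradient descent is a learning rate l_r that is too large for the scale of the loss, so updates overshoot and grow without bound. A tiny NumPy sketch on a hypothetical quadratic loss (the l_r values here are illustrative, not from the question) shows the divergence pattern:

```python
def sgd_final_weight(l_r, steps=50):
    """Minimize f(w) = w**2 with plain gradient descent (grad = 2*w).
    On this loss the update is stable only for l_r < 1.0."""
    w = 1.0
    for _ in range(steps):
        w -= l_r * 2.0 * w
    return w

print(sgd_final_weight(0.1))   # small: converges toward the minimum
print(sgd_final_weight(1.5))   # huge: each step doubles |w|, so it diverges
```

If lowering l_r alone does not help, clipping gradients before applying them (e.g. via `optim.compute_gradients` / `optim.apply_gradients` with `tf.clip_by_value` in between) is a common safeguard.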

Training code:

saver = tf.train.Saver()

## initialize all the variables
init = tf.global_variables_initializer()

# start training session
loss_vector = []
sess = tf.Session()
sess.run(init)
for epoch in range(total_epochs):
    epoch_loss = 0
    i = 0
    while i < len(final_data):  # final_data is the training dataset
        start = i
        end = i + batch_size
        batch_x = np.array(final_data[start:end])
        # run the training op once per batch and fetch the loss in the same call
        loss, _ = sess.run([cost_fn, training], feed_dict={x: batch_x})
        epoch_loss = epoch_loss + loss
        i = i + batch_size
    loss_vector.append(epoch_loss)
    print('epoch', epoch + 1, 'is completed out of', total_epochs, 'Loss::', epoch_loss)
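The batch-slicing logic in the loop above can be checked without TensorFlow. A plain-Python sketch (with a hypothetical 10-sample final_data and batch_size of 4) confirms the slices cover the whole dataset, with a smaller final batch when len(final_data) is not a multiple of batch_size:

```python
import numpy as np

final_data = np.arange(10).reshape(10, 1)  # hypothetical 10-sample dataset
batch_size = 4

batches = []
i = 0
while i < len(final_data):
    batches.append(final_data[i:i + batch_size])
    i += batch_size

print([len(b) for b in batches])  # -> [4, 4, 2]; the last batch is smaller
```

Since slicing past the end of a NumPy array is safe, `final_data[start:end]` needs no special handling for the final partial batch.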

Looking forward to any help. Thanks in advance :)

0 Answers:

There are no answers yet.