Autoencoder: loss stays constant

Time: 2019-05-03 18:26:49

Tags: python tensorflow

When training an autoencoder for a recommender system, the loss stays constant at every epoch. In addition, since each user rates only a small number of items, the input is effectively sparse.

I have already tried changing the learning rate and the number of input units, but it didn't help.

import tensorflow as tf

input_dim = 1000   # number of items (example value; set to your data)
hidden_dim = 128   # hidden-layer size (example value)

#graph part
#setup placeholder and variables
x = tf.placeholder(dtype=tf.float32, shape=[None, input_dim], name='x')
x_ = tf.placeholder(dtype=tf.float32, shape=[None, input_dim], name='x_')
pos = tf.placeholder(dtype=tf.float32, shape=[None, input_dim], name='pos')  # 0/1 mask of rated items

enc_w = tf.Variable(tf.truncated_normal([input_dim, hidden_dim], dtype=tf.float32))
enc_b = tf.Variable(tf.truncated_normal([hidden_dim], dtype=tf.float32))
dec_w = tf.transpose(enc_w)  # tied decoder weights
dec_b = tf.Variable(tf.truncated_normal([input_dim], dtype=tf.float32))

#setup network
encoded = tf.nn.sigmoid(tf.matmul(x, enc_w) + enc_b, name='encoded')
decoded = tf.nn.sigmoid(tf.matmul(encoded, dec_w) + dec_b, name='decoded')
real_decoded = tf.multiply(decoded, pos)  # keep only the rated positions

#setup loss (RMSE over the masked reconstruction)
loss = tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(x_, real_decoded))))
#loss += 5e-5 * (tf.nn.l2_loss(enc_b) + tf.nn.l2_loss(dec_b) + tf.nn.l2_loss(enc_w))

#optimizer with exponential learning-rate decay
global_step = tf.Variable(0, trainable=False)
starter_learning_rate = 0.001
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step,
                                           100000, 0.96, staircase=False)
# global_step must be passed to minimize(), otherwise it never
# increments and the decay schedule has no effect
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss, global_step=global_step)

#decay_steps might need to be changed across training
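For reference, the masked RMSE above can be sanity-checked on a toy matrix. This is a NumPy sketch of the same computation; the rating values are made up for illustration:

```python
import numpy as np

# x_ holds the observed ratings (0 = unrated), pos the 0/1 mask of rated entries
x_ = np.array([[5.0, 0.0, 3.0],
               [0.0, 4.0, 0.0]], dtype=np.float32)
pos = (x_ > 0).astype(np.float32)

# stand-in for the network's reconstruction
decoded = np.array([[4.5, 2.0, 3.5],
                    [1.0, 3.0, 2.0]], dtype=np.float32)
real_decoded = decoded * pos  # zero out unrated positions, as in the graph

# RMSE averaged over ALL entries (what tf.reduce_mean does above)
rmse = np.sqrt(np.mean((x_ - real_decoded) ** 2))

# variant: average only over the observed entries
masked_rmse = np.sqrt(np.sum((x_ - real_decoded) ** 2) / np.sum(pos))
```

Note that `tf.reduce_mean` averages the squared error over all `input_dim` positions, including the unrated ones, which contribute exactly zero after masking. For very sparse users this divides a handful of real errors by a large constant, so the loss can look nearly flat across epochs; normalizing by the number of observed entries (the `masked_rmse` variant) is one thing worth checking.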

0 Answers:

There are no answers yet.