I'm trying to solve an optimization problem, but the optimizer doesn't work. I expect a set of solutions that differ minimally from the given values, but my computed softmax function always returns 1, because the weights and bias are not updated across iterations. They remain all-zero tensors. How can I fix this?
import tensorflow as tf

# Model
x = tf.placeholder(tf.float32, [None, 5], name='input')  # inputs x_1-x_5
#Init
W = tf.Variable(tf.zeros([5,1]), dtype=tf.float32)
b = tf.Variable(tf.zeros([1]), dtype=tf.float32)
# Sigmoid (note: the next line actually applies softmax)
y = tf.nn.softmax(tf.matmul(x, W) + b)
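# With a single output column, softmax normalizes a single value, so y is always 1.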
#Training
y_tensor = tf.placeholder(tf.float32, [None, 1], name='output')
loss = y-y_tensor
cost = tf.square(loss)
optimizer = tf.train.GradientDescentOptimizer(0.003).minimize(cost)
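# Note: minimize() already builds one op that both computes and applies the gradients.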
#Start
session = tf.Session()
init = tf.global_variables_initializer()
session.run(init)
# Build the first 1000 training examples
batch_xs = []
batch_ys = []
for i in range(1000):
    batch_xs.append([dataA[i], dataB[i], dataC[i], dataD[i], dataE[i]])
    batch_ys.append([solution[i]])
for i in range(10000):
    session.run(optimizer, feed_dict={x: batch_xs, y_tensor: batch_ys})
    print(session.run(y, feed_dict={x: batch_xs, y_tensor: batch_ys}))
Answer (score: 0)
You are not computing and applying the gradients. These lines are missing:
gradients = optimizer.compute_gradients(loss=cost)
train_step = optimizer.apply_gradients(grads_and_vars=gradients)

You also need to run the train step at each iteration, using the following line:
session.run(train_step, feed_dict={x: batch_xs, y_tensor: batch_ys})
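For reference, here is a minimal self-contained sketch that wires the answer's two lines into the script above. The random stand-in data is an assumption (the original dataA through dataE and solution arrays are not shown), and the activation is switched to tf.sigmoid, as the asker's # Sigmoid comment suggests, because softmax over a single output unit is constantly 1:

import numpy as np
import tensorflow as tf

# Model: 5 inputs, 1 output (sigmoid instead of softmax; softmax over a
# single unit always returns 1)
x = tf.placeholder(tf.float32, [None, 5], name='input')
W = tf.Variable(tf.zeros([5, 1]), dtype=tf.float32)
b = tf.Variable(tf.zeros([1]), dtype=tf.float32)
y = tf.sigmoid(tf.matmul(x, W) + b)

# Squared-error cost
y_tensor = tf.placeholder(tf.float32, [None, 1], name='output')
cost = tf.square(y - y_tensor)

# The answer's two lines: compute the gradients, then apply them
optimizer = tf.train.GradientDescentOptimizer(0.003)
gradients = optimizer.compute_gradients(loss=cost)
train_step = optimizer.apply_gradients(grads_and_vars=gradients)

# Stand-in training data (assumption: random values in place of
# dataA-dataE and solution)
batch_xs = np.random.rand(1000, 5).astype(np.float32)
batch_ys = np.random.rand(1000, 1).astype(np.float32)

session = tf.Session()
session.run(tf.global_variables_initializer())
for i in range(10000):
    session.run(train_step, feed_dict={x: batch_xs, y_tensor: batch_ys})
print(session.run(cost, feed_dict={x: batch_xs, y_tensor: batch_ys}).mean())

In TF 1.x, minimize(cost) is shorthand for compute_gradients() followed by apply_gradients(), so the explicit two-step form mainly makes the gradient flow visible; what stops y from sticking at 1 with a single output unit is avoiding softmax there.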