Threshold function - No gradients provided for any variable

Date: 2018-09-26 14:34:02

Tags: python tensorflow

What I want to achieve is this: take the output layer of the neural network, normalize its values, apply a threshold operation, and use as the cost function the difference between the binarized output values and my class labels. I keep getting the error in the title: no gradients provided for any variable.

Here is the input part of the code:

# Input and Expected Output of the neural networks
xs = tf.placeholder("float32", [None, n_features], name='XtoNN')
ys = tf.placeholder("float32", [None, 1], name='YfromNN')

# Hidden Layer
weightsH = tf.Variable(tf.truncated_normal([n_features, neurons_in_hlayer], mean=0,
                                     stddev=1 / np.sqrt(n_features)), name='weights1')
biasesH = tf.Variable(tf.truncated_normal([neurons_in_hlayer],mean=0, stddev=1 / np.sqrt(n_features)), name='biases1')
yValH = tf.nn.sigmoid(tf.add(tf.matmul(xs, weightsH),biasesH), name='activationLayer1')


# Output Layer
WeightsO = tf.Variable(tf.truncated_normal([neurons_in_hlayer, n_classes], mean=0, stddev = 1/np.sqrt(n_features)),
                                           name='weightsOut')
biasesO = tf.Variable(tf.truncated_normal([n_classes], mean=0, stddev=1 / np.sqrt(n_features)), name='biasesOut')
yPred = tf.cast(tf.add(tf.matmul(yValH, WeightsO), biasesO), tf.float32)

# Cost function
redYPred = tf.div(tf.subtract(yPred, tf.reduce_min(yPred)),
                  tf.subtract(tf.reduce_max(yPred), tf.reduce_min(yPred)))
binaryYPred = tf.cast(tf.to_int32(redYPred > tf.reduce_mean(redYPred)), tf.float32)
cost = tf.reduce_mean(tf.square(binaryYPred-ys, name='Cost'))

# Optimizer
train = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

The session for the model:

startTime = datetime.now()
# Session
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # sess.run(tf.local_variables_initializer())
    saver = tf.train.Saver()
    for i in range(training_epochs):
        for j in range(n_samples):
            # Run NN
            sess.run([cost, train], feed_dict={xs: X_train[j, :].reshape(1, n_features),
                                               ys: Y_train[j].reshape(1,n_classes)})
        currentEpochCost = sess.run(cost, feed_dict={xs: X_train, ys: Y_train})
        print('Epoch ', (i+1), ': Cost = ', currentEpochCost)

    timeTaken = datetime.now() - startTime
    print('Time Taken: ', timeTaken)

    yTestPredict = sess.run(binaryYPred, feed_dict={xs: X_test})

1 Answer:

Answer 0 (score: 0):

This happens because you are adding an inherently non-differentiable operation (a hard threshold) to your computation. Since the threshold has no gradient, no gradients can be backpropagated through your network.
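You can see the broken gradient chain in a tiny standalone graph (a minimal sketch in TF 1.x, matching the API used in the question; the variable names here are made up for illustration):

import tensorflow as tf

# The comparison op (Greater) has no registered gradient, so the chain from
# the loss back to the variable is cut and tf.gradients returns None.
x = tf.Variable([0.2, 0.8], dtype=tf.float32)
binary = tf.cast(x > 0.5, tf.float32)                       # hard threshold
loss = tf.reduce_mean(tf.square(binary - tf.constant([0.0, 1.0])))
print(tf.gradients(loss, [x]))                              # prints [None]
# GradientDescentOptimizer(...).minimize(loss) then raises
# "No gradients provided for any variable".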

Is there a specific reason you can't use a softmax to assign the output to one of the 2 output classes? In a sense, it does exactly what you are trying to achieve.
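A minimal rework along those lines, reusing the names from your graph (a sketch only; it assumes n_classes = 2 with one-hot labels, so ys would become shape [None, n_classes]):

# Keep the output layer as raw logits and let a differentiable softmax
# cross-entropy replace the normalize/threshold/square chain.
logits = tf.add(tf.matmul(yValH, WeightsO), biasesO, name='logits')
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=ys, logits=logits),
    name='Cost')
train = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)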

Once your network is trained and its outputs for the two classes are, say, 97% and 3%, binarizing the output at test/inference time is trivial.
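For example (a sketch continuing from the logits above): the hard decision is taken only at inference, outside anything the optimizer needs to differentiate:

probabilities = tf.nn.softmax(logits)               # e.g. [[0.97, 0.03], ...]
predicted_class = tf.argmax(probabilities, axis=1)  # hard decision, inference only
yTestPredict = sess.run(predicted_class, feed_dict={xs: X_test})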