ValueError: No variables to optimize in GradientDescentOptimizer

Time: 2019-08-26 04:15:21

Tags: python tensorflow

I am trying to write a simple TensorFlow 2.0 program for linear regression:

import tensorflow as tf
import numpy as np

x = tf.random.uniform([3, 10])
# true parameters used to generate y
coeff = tf.constant([[1., 2., 3.]])
intercept = 5.

def calcy(x=x, coeff=coeff, intercept=intercept):
    return tf.linalg.matmul(coeff, x) + intercept

y = calcy()

@tf.function
def train(x=x, y=y):
    train_coeff = tf.Variable([[0, 0, 0]], dtype=tf.float32)
    train_intercept = tf.Variable(0, dtype=tf.float32)

    result_y = calcy(x, train_coeff, train_intercept)

    loss = tf.math.reduce_mean(tf.math.square(result_y - y))
    for _ in range(10):
        # this line raises ValueError: No variables to optimize
        tf.compat.v1.train.GradientDescentOptimizer(0.5).minimize(loss)

train()

It raises ValueError: No variables to optimize.

1 Answer:

Answer 0: (score: 0)

I changed part of the train function and it works now. In TF 2.x, variables created with tf.Variable are not registered in the v1 collections that tf.compat.v1.train.GradientDescentOptimizer.minimize(loss) looks up, so it finds nothing to optimize; instead, the gradients have to be computed explicitly with tf.GradientTape and applied through a Keras optimizer:

# create the trainable variables once, outside the tf.function
train_coeff = tf.Variable([[0, 0, 0]], dtype=tf.float32)
train_intercept = tf.Variable(0, dtype=tf.float32)

optimizer = tf.keras.optimizers.Adam()

@tf.function
def train(x=x, y=y):
    # record the forward pass so the gradients can be computed
    with tf.GradientTape() as tape:
        result_y = calcy(x, train_coeff, train_intercept)
        loss = tf.math.reduce_mean(tf.math.square(result_y - y))
    # compute and apply the gradients outside the tape context
    gradient = tape.gradient(loss, (train_coeff, train_intercept))
    optimizer.apply_gradients(zip(gradient, (train_coeff, train_intercept)))


for _ in range(10000):
    train()
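
After the loop, the fitted values can be compared against the parameters that generated y above (a minimal sketch; the exact numbers depend on the random x and on Adam's default learning rate):

print(train_coeff.numpy())      # should be close to [[1. 2. 3.]]
print(train_intercept.numpy())  # should be close to 5.0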