Optimizer minimize error: 'float' object has no attribute 'dtype'

时间:2019-09-02 11:40:29

标签: python tensorflow optimization gradient dtype

I am a beginner with TensorFlow. I have run into some problems computing gradients with TensorFlow 2.0. Can anyone help me?

Here is my code. The error message is:

if not t.dtype.is_floating:
AttributeError: 'float' object has no attribute 'dtype'

I tried:

w = tf.Variable([1.0,1.0],dtype = tf.float32)

and the error message changed to:

TypeError: 'tensorflow.python.framework.ops.EagerTensor' object is not callable

Here is the full code:

import tensorflow as tf
import numpy as np
train_X = np.linspace(-1, 1, 100)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.33 + 10

# w = tf.Variable([1.0,1.0],dtype = tf.float32)
w = [1.0,1.0]
opt=tf.keras.optimizers.SGD(0.1)
mse=tf.keras.losses.MeanSquaredError()
for i in range(20):
    print("epoch:",i,"w:", w)
    with tf.GradientTape() as tape:
        logit = w[0] * train_X + w[1]
        loss= mse(train_Y,logit)
    w = opt.minimize(loss, var_list=w)

I don't know how to fix this. Thanks for any comments.

1 answer:

Answer 0: (score: 0)

You are not using GradientTape correctly. The code below demonstrates how you should apply it. I have created a model with a single-unit Dense layer that plays the role of your w variable (one weight plus one bias).

import tensorflow as tf
import numpy as np
train_X = np.linspace(-1, 1, 100)
train_X = np.expand_dims(train_X, axis=-1)
print(train_X.shape)    # (100, 1)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.33 + 10
print(train_Y.shape)    # (100, 1)

# First create a model with a single Dense unit (one weight) plus a bias
input = tf.keras.layers.Input(shape=(1,))
w = tf.keras.layers.Dense(1)(input)   # use_bias is True by default
model = tf.keras.Model(inputs=input, outputs=w)

opt=tf.keras.optimizers.SGD(0.1)
mse=tf.keras.losses.MeanSquaredError()

for i in range(20):
    print('Epoch: ', i)
    with tf.GradientTape() as grad_tape:
        logits = model(train_X, training=True)
        model_loss = mse(train_Y, logits)
        print('Loss =', model_loss.numpy())

    # Compute the gradients of the loss w.r.t. the model's weight and bias,
    # then let the optimizer apply one update step.
    gradients = grad_tape.gradient(model_loss, model.trainable_variables)
    opt.apply_gradients(zip(gradients, model.trainable_variables))
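
As for the two errors themselves: in TF 2.x, optimizer.minimize() expects every entry of var_list to be a tf.Variable (a plain Python float has no .dtype, hence the first error), and it expects loss to be a zero-argument callable rather than an already-evaluated tensor (hence the "'EagerTensor' object is not callable" error). If you would rather stay close to your original formulation without building a Keras model, a minimal sketch along the following lines should also work; the float32 casts are my own addition to keep the dtypes consistent:

import tensorflow as tf
import numpy as np

train_X = np.linspace(-1, 1, 100).astype(np.float32)
train_Y = (2 * train_X + np.random.randn(*train_X.shape) * 0.33 + 10).astype(np.float32)

# A trainable variable for slope and intercept; unlike a plain Python list,
# it has a floating dtype and is tracked by the tape.
w = tf.Variable([1.0, 1.0], dtype=tf.float32)

opt = tf.keras.optimizers.SGD(0.1)
mse = tf.keras.losses.MeanSquaredError()

for i in range(20):
    with tf.GradientTape() as tape:
        logit = w[0] * train_X + w[1]
        loss = mse(train_Y, logit)
    # Differentiate the loss w.r.t. w and apply one SGD step.
    grads = tape.gradient(loss, [w])
    opt.apply_gradients(zip(grads, [w]))
    print("epoch:", i, "w:", w.numpy(), "loss:", loss.numpy())

Equivalently, calling opt.minimize(lambda: mse(train_Y, w[0] * train_X + w[1]), var_list=[w]) inside the loop should also work, because the loss is then passed as a callable rather than as a tensor.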