TensorFlow 2.0: tf.GradientTape().gradient() returns None

Asked: 2019-06-27 01:57:23

Tags: python tensorflow machine-learning deep-learning eager-execution

For my graduate research I designed my own loss function, which computes the distance between a histogram of the losses and a normal distribution. I am implementing this loss function within the setup of the TensorFlow 2.0 tutorial on iris flower classification.

I checked the loss value and its type, and they are the same as in the tutorial, but tape.gradient() gives me None for every entry in grads.

This is run in Google Colab with:

TensorFlow version: 2.0.0-beta1

Eager execution: True

My loss and gradient code blocks:

def loss(model, x, y):
  y_ = model(x) # y_.shape is (batch_size, 3)
  losses = []
  for i in range(y.shape[0]):
    loss = loss_object(y_true=y[i], y_pred=y_[i])
    losses.append(float(loss))
  dis = get_distance_between_samples_and_distribution(losses, if_plot = 0)
  return tf.convert_to_tensor(dis, dtype=np.float32)

def grad(model, inputs, targets):
  with tf.GradientTape() as tape:
    loss_value = loss(model, inputs, targets)
    tape.watch(model.trainable_variables)
  return loss_value, tape.gradient(loss_value, model.trainable_variables)

loss_value, grads = grad(model, features, labels)
print("loss_value:",loss_value)
print("type(loss_value):", type(loss_value))
print("grads:", grads)
################################################# Output:
loss_value: tf.Tensor(0.21066944, shape=(), dtype=float32)
type(loss_value): <class 'tensorflow.python.framework.ops.EagerTensor'>
grads: [None, None, None, None, None, None]

The code from the tutorial is:

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def loss(model, x, y):
  y_ = model(x)
  return loss_object(y_true=y, y_pred=y_)

def grad(model, inputs, targets):
  with tf.GradientTape() as tape:
    loss_value = loss(model, inputs, targets)
    tape.watch(model.trainable_variables)
  return loss_value, tape.gradient(loss_value, model.trainable_variables)

loss_value, grads = grad(model, features, labels)
print("loss_value:",loss_value)
print("type(loss_value):", type(loss_value))
print("grads:", grads)
################################################# Output:
loss_value: tf.Tensor(0.56536925, shape=(), dtype=float32)
type(loss_value): <class 'tensorflow.python.framework.ops.EagerTensor'>
grads: [<tf.Tensor: id=9962, shape=(4, 10), dtype=float32, numpy=
array([[ 0.0000000e+00,  6.5984917e-01,  3.0700830e-01, -7.5234145e-01,
      ......

Since the data types and shapes are identical, I feel that the computation inside my custom loss shouldn't matter; but in case it does, here is my loss function:

import numpy as np
import scipy.stats

def get_distance_between_samples_and_distribution(errors, if_plot=1, n_bins=5):
  def get_middle(x):
    # Midpoint of each consecutive pair of entries.
    xMid = np.zeros(x.shape[0] // 2)
    for i in range(xMid.shape[0]):
      xMid[i] = 0.5 * (x[2*i] + x[2*i+1])
    return xMid

  # density=True is the current spelling of the deprecated normed=1.
  bins, edges = np.histogram(errors, n_bins, density=True)
  left, right = edges[:-1], edges[1:]
  X = np.array([left, right]).T.flatten()
  Y = np.array([bins, bins]).T.flatten()
  X_middle = get_middle(X)
  Y_middle = get_middle(Y)
  distance = []
  for i in range(X_middle.shape[0]):
    # Gap between the empirical histogram and the standard normal pdf.
    dis = np.abs(scipy.stats.norm.pdf(X_middle[i]) - Y_middle[i])
    distance.append(dis)
  distance2 = np.power(distance, 2)

  return sum(distance2) / len(distance2)
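
For completeness, a quick check on made-up numbers (not my real data) shows that the returned value has the same dtype and shape as the tutorial's loss:

import numpy as np
import tensorflow as tf

# Stand-in per-sample losses, just to exercise the function above.
dummy_losses = list(np.random.rand(32))

dis = get_distance_between_samples_and_distribution(dummy_losses, if_plot=0)
loss_value = tf.convert_to_tensor(dis, dtype=np.float32)
print(loss_value.dtype, loss_value.shape)  # float32, () -- same as the tutorial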

I have searched around and tried adding tape.watch() and checking the indentation of the return statement, but neither fixed the None problem. Any suggestions for solving this would be greatly appreciated. Thanks!

The definition of tf.GradientTape is here.
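
For reference, the basic pattern from that documentation, reduced to a scalar example (a minimal sketch, not my actual code), does produce a gradient:

import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
  y = x * x                    # all TensorFlow ops, so the tape can trace them
dy_dx = tape.gradient(y, x)    # tf.Tensor(6.0, shape=(), dtype=float32)
print(dy_dx)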

1 Answer:

Answer 0 (score: 0)

The reason is that my loss function is not differentiable: the per-sample losses are converted to Python floats and the distance is computed with NumPy/SciPy, so the tape has no TensorFlow ops to trace between the model's variables and the final loss value. I used another measure of the similarity between the two distributions, and it works now.
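
For reference, here is a minimal sketch of what a differentiable version can look like, staying in TensorFlow ops from the model output all the way to the scalar loss. The moment-matching distance below is only illustrative (it compares the mean and variance of the per-sample losses against placeholder normal-distribution targets), not necessarily the exact measure I settled on:

import tensorflow as tf

def loss(model, x, y):
  y_ = model(x)  # (batch_size, 3) logits, as in the tutorial
  # Keep the per-sample losses as a tensor so the tape can trace them.
  per_sample = tf.keras.losses.sparse_categorical_crossentropy(
      y_true=y, y_pred=y_, from_logits=True)
  # Illustrative differentiable "distance to a normal": match the first two
  # moments of the loss sample to a target N(target_mu, target_sigma2).
  mu = tf.reduce_mean(per_sample)
  sigma2 = tf.math.reduce_variance(per_sample)
  target_mu, target_sigma2 = 0.0, 1.0  # placeholder targets
  return tf.square(mu - target_mu) + tf.square(sigma2 - target_sigma2)

def grad(model, inputs, targets):
  with tf.GradientTape() as tape:
    loss_value = loss(model, inputs, targets)
  # Trainable variables are watched automatically; no tape.watch needed.
  return loss_value, tape.gradient(loss_value, model.trainable_variables)

Because every step is a TensorFlow op, tape.gradient can trace the whole path and returns actual gradient tensors instead of None.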