I am trying to create a custom loss function for my deep learning model, and I am running into an error.
I'll give a code example here. It is not the code I actually want to use, but if I understand how to get this small loss function working, I think I'll be able to get my long loss function working too. So essentially I'm asking for help getting the following to work:
model.compile(optimizer='rmsprop', loss=try_loss(pic_try), metrics=['accuracy'])

def try_loss(pic):
    def try_2_loss(y_true, y_pred):
        return tf.py_function(func=try_3_loss, inp=[y_pred, pic], Tout=tf.float32)
    return try_2_loss

def try_3_loss(y_pred, pic):
    return tf.reduce_mean(pic)
Now I would like to know the following:

1. Does the pic I pass into the model.compile line need to be a tensor, or can it be a numpy array?
2. In the try_3_loss function, can I replace tf.reduce_mean with np.mean?
3. In the try_3_loss function, can I use ordinary numpy commands on y_pred, such as np.mean(y_pred)?

The main thing is that I want to use as many numpy commands as possible (see the short sketch below).
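An illustrative sketch (an editorial addition, not from the original post): tf.py_function hands its func eager tensors, so numpy operations such as np.mean are valid inside try_3_loss, on pic and y_pred alike, and pic_try itself can be a plain numpy array; the Tout argument converts the numpy result back into a tf.float32 tensor.

import numpy as np
import tensorflow as tf

pic_try = np.random.rand(100, 100).astype(np.float32)  # a plain numpy array works here

def try_3_loss(y_pred, pic):
    # Both arguments arrive as eager tensors inside tf.py_function,
    # so ordinary numpy calls are fine:
    return np.mean(pic)  # np.mean(y_pred) would be just as valid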
I have tried all kinds of things: I tried making pic a numpy array and using np.mean(pic) in the try_3_loss function, and I tried making pic a tensor object and using tf.reduce_mean in try_3_loss, and I also tried running sess.run(pic) before running the model.compile line. In all of the above cases I get the following error:
TypeError                                 Traceback (most recent call last)
<ipython-input-75-ff45de7120bc> in <module>()
----> 1 model.compile(optimizer='rmsprop',loss=try_loss(pic_try), metrics=['accuracy'])

1 frames
/usr/local/lib/python3.6/dist-packages/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, **kwargs)
    340                 with K.name_scope(self.output_names[i] + '_loss'):
    341                     output_loss = weighted_loss(y_true, y_pred,
--> 342                                                 sample_weight, mask)
    343                 if len(self.outputs) > 1:
    344                     self.metrics_tensors.append(output_loss)

/usr/local/lib/python3.6/dist-packages/keras/engine/training_utils.py in weighted(y_true, y_pred, weights, mask)
    418             weight_ndim = K.ndim(weights)
    419             score_array = K.mean(score_array,
--> 420                                  axis=list(range(weight_ndim, ndim)))
    421             score_array *= weights
    422             score_array /= K.mean(K.cast(K.not_equal(weights, 0), K.floatx()))

TypeError: 'NoneType' object cannot be interpreted as an integer
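An editorial note on this traceback: tf.py_function returns a tensor whose static shape is unknown, so K.ndim(score_array) inside Keras's loss weighting returns None, and list(range(weight_ndim, None)) raises exactly this TypeError. One common workaround (a hedged sketch, assuming a scalar loss; untested against this exact setup) is to restore a static shape on the py_function output:

def try_loss(pic):
    def try_2_loss(y_true, y_pred):
        loss = tf.py_function(func=try_3_loss, inp=[y_pred, pic], Tout=tf.float32)
        loss.set_shape(())  # declare the scalar shape that Keras cannot infer on its own
        return loss
    return try_2_loss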
Answer 0 (score: 0)
Some test code:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model
from tensorflow.keras import backend as K

@tf.custom_gradient
def py_loss_fn(y_true, y_pred):
    """ This function takes eager tensors as inputs, which can be explicitly
        converted to np.arrays via EagerTensor.numpy() or implicitly converted
        by applying numpy operations to them.
        However, once tf operations are no longer used, it means that the
        function has to implement its own gradient function.
    """
    def grad(dy):
        """ Compute gradients for the function inputs.
            Ignore input[0] (y_true) since that is model.targets[0].
        """
        g = np.mean(-dy * np.sign(y_true - y_pred), axis=1)[:, np.newaxis]
        return None, g
    return np.mean(np.abs(y_true - y_pred), axis=1), grad

def eager_loss_fn(y_true, y_pred):
    """ If tf operations are used on eager tensors, auto diff works without issues. """
    return tf.reduce_mean(tf.abs(y_true - y_pred))

def loss_fn(y_true, y_pred, **kw_args):
    """ This function takes symbolic tensors as inputs; numpy operations are not valid. """
    # loss = tf.py_function(eager_loss_fn, inp=[y_true, y_pred], Tout=tf.float32)
    loss = tf.py_function(py_loss_fn, inp=[y_true, y_pred], Tout=tf.float32)
    return loss

def make_model():
    """ Linear regression model with a custom loss """
    inp = Input(shape=(4,))
    out = Dense(1, use_bias=False)(inp)
    model = Model(inp, out)
    model.compile('adam', loss_fn)
    return model

model = make_model()
model.summary()
Test code to call the model:
import numpy as np

FACTORS = np.arange(4) + 1

def test_fn(x):
    return np.dot(x, FACTORS.T)

X = np.random.rand(3, 4)
Y = np.apply_along_axis(test_fn, 1, X)

history = model.fit(X, Y, epochs=1000, verbose=False)
print(history.history['loss'][-1])
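A quick sanity check one could append here (an assumption about the intended verification, not part of the original answer): since the model is bias-free linear regression against test_fn, the learned kernel should approach FACTORS.

print(model.get_weights()[0].ravel())  # expected to end up close to [1. 2. 3. 4.]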
Answer 1 (score: 0)
Thank you so much for your help! I actually decided to switch to TF 2.0, and writing the functions is much easier there. It is somewhat more expensive in terms of efficiency, but I can always switch from np arrays to tensors and back very easily. So I just wrote everything in numpy array format and then switched it back: all of the functions' inputs and outputs are tensors, but inside each function I convert to numpy arrays, and just before returning I convert back to a tensor. However, I still get an error. The code looks like this:
model.compile(optimizer='rmsprop', loss=custom_loss(pic),
              loss_weights=[None], metrics=['accuracy'])

def my_loss(y_true, y_pred):
    return loss(y_pred, pic)

def custom_loss(pic):
    return my_loss
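For context, a minimal sketch of the tensor -> numpy -> tensor pattern described above (the real body of loss was not posted; the numpy math below is a hypothetical placeholder):

def loss(y_pred, pic):
    y_pred = np.array(y_pred)             # EagerTensor -> numpy array
    pic = np.array(pic)
    loss_value = np.mean(np.abs(y_pred))  # placeholder for the real numpy computation
    return tf.convert_to_tensor(loss_value, dtype=tf.float64)  # numpy -> tensor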
When I actually run the loss function by itself (not inside model.compile), like this:
my_loss(x0,x0)
I get the following:
orig shape x: (1, 2501)
shape x: (2501,)
shape pic: (100, 100)
shape a: ()
shape ms: (2500,)
r_size: 50
c_size: 50
<tf.Tensor: id=261, shape=(), dtype=float64, numpy=6.741635588952273>
So I do get a tensor output with the loss value I want (the printed lines are there to help understand the error). But when I try to run the compile command, I get this:
orig shape x: ()
(...a bunch of unnecessary stuff...)
----> 4 x=np.reshape(x,(2501,1))
      5 x=np.reshape(x,(2501,))
      6 pic=np.array(pic)

/usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py in reshape(a, newshape, order)
    290            [5, 6]])
    291     """
--> 292     return _wrapfunc(a, 'reshape', newshape, order=order)
    293
    294

/usr/local/lib/python3.6/dist-packages/numpy/core/fromnumeric.py in _wrapfunc(obj, method, *args, **kwds)
     54 def _wrapfunc(obj, method, *args, **kwds):
     55     try:
---> 56         return getattr(obj, method)(*args, **kwds)
     57
     58     # An AttributeError occurs if the object does not have

ValueError: cannot reshape array of size 1 into shape (2501,1)
It is as if the compiler does not understand that y_pred will have the size of my model's output.
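A likely explanation (an editorial note, not part of the original post): model.compile/fit traces the loss function with symbolic tensors, so y_pred carries no concrete values, and np.array(y_pred) produces a size-1 object array, which is why the reshape to (2501, 1) fails even though the standalone eager call above works. In TF 2.x a usual workaround (a hedged sketch, untested against this exact model) is to force the training step to run eagerly:

model.compile(optimizer='rmsprop', loss=custom_loss(pic), metrics=['accuracy'])
model.run_eagerly = True  # the loss now receives eager tensors with real shapes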
My model:
model = tf.keras.Sequential()
# add model layers
model.add(layers.Conv2D(64, kernel_size=3, activation='linear',
                        input_shape=(inputs_shape_0, inputs_shape_1, 1)))
#model.add(LeakyReLU(alpha=0.3))
model.add(layers.Conv2D(32, kernel_size=3, activation='linear'))
#model.add(LeakyReLU(alpha=0.3))
model.add(layers.Flatten())
model.add(layers.Dense(2501, activation='linear'))
Any ideas how to fix this? I will also go through the test code you sent to get ideas.
Thank you!