What is the correct way to access the inner data of y_true and y_pred in a custom Keras loss function (TensorFlow backend)?

Asked: 2019-04-03 15:17:10

Tags: python tensorflow keras loss-function

I am creating a custom loss in Keras with the TensorFlow backend.

Extracting an array inside the loss function does not work.

(This is my first post, and I am trying hard to meet the standards :))

My input is an array of 80x80x1 matrices (as x_train).

The loss function should perform some calculation on the predicted parameters and the input; the result of that calculation is what should be minimized.

The problem:

The shape of y_true inside the custom loss: [batch, 80, 80, 1]

The question is: how do I get the 80x80 part back out of y_true? The following code does not work:

extracted_data_80x80 = y_true[0, :, :, 0]

The corresponding error message is:

ValueError: Index out of range using input dim 2; input has only 2 dims for 'loss/dense_3_loss/strided_slice_7' (op: 'StridedSlice') with input shapes: [?,?], [4], [4], [4] and with computed input tensors: input[3] = <1 1 1 1>.
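The error says the tensor being sliced has only 2 dims. A quick NumPy sketch reproduces the same class of failure, assuming (as the error message suggests) that the loss actually receives a 2-D (batch, 3) tensor matching the Dense(3) output, rather than the 4-D targets:

```python
import numpy as np

# Hypothetical stand-in for what the loss receives: if the model ends in
# Dense(3), the loss is handed (batch, 3) tensors, not (batch, 8, 8, 1).
y_true_2d = np.random.rand(2, 3)

try:
    y_true_2d[0, :, :, 0]  # four indices on a 2-D array
    failed = False
except IndexError:
    failed = True  # too many indices for the array's rank

print(failed)  # True
```

This is only an illustration of the shape mismatch with NumPy semantics; TensorFlow raises its own ValueError for the equivalent strided slice.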

Sample code:

# -------
# Import
# -------
import tensorflow as tf
import numpy as np

# -----------------------
# Generating random input
# 10000 pieces of 8x8 arrays
# -----------------------
X_train = np.random.rand(10000, 8, 8)
y_train = np.random.rand(10000, 8, 8)

# Here we add an additional dimension to be able to work with
# Keras Conv2D (maybe this step is the origin of the problem)
# (axis=-1 appends the channel dimension; axis=4 is out of range
# for a 3-D array)
X_train = np.expand_dims(X_train, axis=-1)
y_train = np.expand_dims(y_train, axis=-1)


# -----------------
# The Loss Function
# -----------------
def CustomLoss(y_true, y_pred):
    # y_true shape: [batch, 8, 8, 1]

    # Extracting the 8x8 part from y_true
    extracted_data_80x80 = y_true[0, :, :, 0]  # This is NOT working

    return tf.reduce_sum(extracted_data_80x80)


# -----------
# Keras Model
# -----------
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten

model = Sequential()

# add model layers
model.add(Conv2D(64, kernel_size=3, activation="relu", input_shape=(8, 8, 1), data_format="channels_last"))
model.add(Flatten())
model.add(Dense(3, activation="relu"))

# compile model using *CustomLoss* as loss
model.compile(optimizer="adam", loss=CustomLoss)

#train the model
model.fit(X_train, y_train, epochs=100, batch_size=2, verbose=1)
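For reference, the slicing syntax itself is fine when the array really is 4-D. A small NumPy check, with shapes mirroring the sample above:

```python
import numpy as np

# If y_true really arrived as (batch, 8, 8, 1), the slice would succeed:
y_true_4d = np.random.rand(2, 8, 8, 1)
extracted = y_true_4d[0, :, :, 0]  # drop the batch and channel dims

print(extracted.shape)  # (8, 8)
```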

So I am really struggling here: what is the correct way to extract the 80x80 part of the tensor?

Thanks for your time, your help, and for saving my life :)

0 Answers:

There are no answers yet.