I need to design a network in which the direction of the predicted eigenvectors should be as close as possible to the ground truth. I am working with cardiac MRI data, and I already have a function that computes the eigenvector for each voxel. My problem is that this function needs more inputs than just y_pred and y_true in order to compute the eigenvectors. My y_true / y_pred have shape (841, 336, 336, 33) (meaning of each dimension: (index, x direction, y direction, channels)). I now need to know the index at any given time so that I can pass the correct slice of the other data to that function. Is there a way to know which of the 841 images the loss function is currently looking at?
Answer 0 (score: 0)
Writing a custom loss function that takes the additional data and the eigenvector-computation function into account will probably serve you best. In particular, I believe you should be able to pass the additional information inside the "y_true" array (at the same index) and then slice it apart as needed inside the loss, using TensorFlow ops to separate the individual components. Here is an example that demonstrates the idea. Note that the section at the top is only there to get reproducible results on Google Colab (CPU runtime); the main code follows the comment "# Rest of code follows ...". I hope this helps.
# Install TensorFlow
try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass
import tensorflow as tf
print(tf.__version__)
print(tf.executing_eagerly())
# Setup repro section from Keras FAQ with TF1 to TF2 adjustments
import numpy as np
import random as rn
# The below is necessary for starting Numpy generated random numbers
# in a well-defined initial state.
np.random.seed(42)
# The below is necessary for starting core Python generated random numbers
# in a well-defined state.
rn.seed(12345)
# Force TensorFlow to use single thread.
# Multiple threads are a potential source of non-reproducible results.
# For further details, see: https://stackoverflow.com/questions/42022950/
session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1,
                                        inter_op_parallelism_threads=1)
# The below tf.set_random_seed() will make random number generation
# in the TensorFlow backend have a well-defined initial state.
# For further details, see:
# https://www.tensorflow.org/api_docs/python/tf/set_random_seed
tf.compat.v1.set_random_seed(1234)
sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf)
tf.compat.v1.keras.backend.set_session(sess)
# Rest of code follows ...
# Custom Loss
def my_custom_loss(y_true, y_pred):
    # Print what the loss receives: y_true carries two columns per sample,
    # which can be sliced apart here. This is the spot where extra per-sample
    # data (e.g. an index) could be recovered from y_true.
    tf.print('inside my_custom_loss:')
    tf.print('y_true:')
    tf.print(y_true)
    tf.print('y_true column 0:')
    tf.print(y_true[:,0])
    tf.print('y_true column 1:')
    tf.print(y_true[:,1])
    tf.print('y_pred:')
    tf.print(y_pred)
    # Toy loss: sum of the logs of the positive predictions.
    y_zeros = tf.zeros_like(y_pred)
    y_mask = tf.math.greater(y_pred, y_zeros)
    res = tf.boolean_mask(y_pred, y_mask)
    logres = tf.math.log(res)
    finres = tf.math.reduce_sum(logres)
    return finres
# Define model
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(1, activation='linear', input_dim=1, name="Dense1"))
model.compile(optimizer='rmsprop', loss=my_custom_loss)
print('model.summary():')
print(model.summary())
# Generate dummy data
data = np.array([[2.0],[1.0],[1.0],[3.0],[4.0]])
labels = np.array([[[2.0],[1.0]],
                   [[0.0],[3.0]],
                   [[0.0],[3.0]],
                   [[0.0],[3.0]],
                   [[0.0],[3.0]]])
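# Note: labels has shape (5, 2, 1), so each sample's y_true carries two values
# that the loss slices apart above via y_true[:,0] and y_true[:,1]; one slot
# can hold the real target and the other any auxiliary per-sample value.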
# Train the model.
print('training the model:')
print('-----')
model.fit(data, labels, epochs=1, batch_size=5)
print('done training the model.')
print(data.shape)
print(labels.shape)
a = model.predict(data)
print(a)
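Applied to the question's data, a minimal sketch of the same idea could look like the code below. Assumptions not in the original answer: the sample index is appended as a 34th channel of y_true, the helper names make_labels_with_index and eigenvector_loss are made up for illustration, and a placeholder mean-squared-error stands in for the real eigenvector-based loss.

import numpy as np
import tensorflow as tf

def make_labels_with_index(y_true_raw):
    # y_true_raw: (841, 336, 336, 33). Broadcast each sample's index into one
    # extra channel so it travels with y_true into the loss function.
    n = y_true_raw.shape[0]
    idx = np.arange(n, dtype=y_true_raw.dtype).reshape(n, 1, 1, 1)
    idx_channel = np.broadcast_to(idx, y_true_raw.shape[:3] + (1,))
    return np.concatenate([y_true_raw, idx_channel], axis=-1)  # (841, 336, 336, 34)

def eigenvector_loss(y_true_with_idx, y_pred):
    # Split the packed tensor back into the real labels and the index channel.
    y_true = y_true_with_idx[..., :33]
    # The index channel is constant over x/y, so one position recovers it.
    sample_idx = tf.cast(y_true_with_idx[:, 0, 0, 33], tf.int32)
    # sample_idx can now select the per-image auxiliary data, e.g. with
    # tf.gather on a precomputed tensor, before computing the actual
    # eigenvector-based loss. A placeholder loss is used here:
    del sample_idx  # only needed for the auxiliary-data lookup
    return tf.reduce_mean(tf.square(y_pred - y_true))

# model.compile(optimizer='rmsprop', loss=eigenvector_loss)
# model.fit(x_data, make_labels_with_index(y_data), ...)

Because the index rides along inside each label, it stays attached to the right sample even when fit() shuffles or batches the data.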