Fidelity custom loss function in TensorFlow - input shape problem

Posted: 2019-09-12 14:08:30

Tags: python tensorflow

I am having trouble feeding the right input into a custom loss function that forms part of a basic multi-output neural network (GRU-based so far).

I have a multi-output network that takes a flattened, (,16)-shaped (4x4) matrix (the "input matrix") as input and produces a sequence of ten 4x4 matrices (represented by a flattened output of shape (160)) as its first output, trained with an 'mse' loss. This part of the model works fine with data.lab as the training data and inputreshape as the label data. For the second output I need to turn that (160) tensor into a (10,4,4) tensor (i.e. reshape it into ten 4x4 matrices) and then multiply them all together to get a single (4,4) "product" matrix; this is done by the custom layer productlayer (a batch-preserving sketch of such a layer is included after the model code below). That output matrix then has to be fed into the custom loss function 'fidelity2', which computes a "fidelity" by comparing it against the original input from data.lab. I do this by feeding data.lab in a second time, but now as the labels. However, (1) model.summary() shows the output of productlayer as (4, 4) when I think it should be (None, 4, 4), and (2) I get the error:

InvalidArgumentError: In[0] is not a matrix. Instead it has shape [10,4,4] [[{{node loss_7/fidout_loss/ArithmeticOptimizer/FoldTransposeIntoMatMul_matmul}}]]

y_true inside the custom loss function seems to be receiving the entire label set passed to model.fit (see the code below) rather than one batch at a time, so it ends up dumping the (10,4,4) label set into the loss and then trying to compare it against the (4,4) y_pred. I'm not sure how to fix this. The code is below. Ultimately I need to be able to compute the fidelity this way (actually, eventually, the infidelity) for arbitrary batch sizes (I have set batch_size in model.fit to 1 just to try to get it working). Suggestions appreciated.
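
In other words, per sample fidelity2 is meant to compute the normalized squared trace overlap, fidelity = |tr(y_true^T · y_pred)|^2 / d^2 with d = 4 the matrix dimension, so both arguments need to be (matching batches of) 4x4 matrices.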

import tensorflow as tf
from tensorflow.keras import layers

def fidelity2(y_true, y_pred):
    y_truetp = tf.transpose(y_true)                    # with no perm argument this reverses *all* axes
    t1 = y_truetp @ y_pred
    tr = tf.trace(t1)
    mxdim = tf.cast(tf.shape(y_pred)[0], tf.float32)   # intended to be the matrix dimension (4)
    fidelity = tf.abs(tr ** 2) / mxdim ** 2
    return fidelity
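
For comparison, a batch-aware version of the same computation might look like the sketch below. This assumes both y_true and y_pred arrive with a leading batch dimension, i.e. shape (batch, 4, 4); it is not the code currently in the model, just an illustration of keeping the batch axis intact.

def fidelity2_batched(y_true, y_pred):
    # Transpose only the matrix axes, leaving the batch axis alone.
    y_true_t = tf.linalg.matrix_transpose(y_true)        # (batch, 4, 4)
    t1 = tf.matmul(y_true_t, y_pred)                     # batched matmul -> (batch, 4, 4)
    tr = tf.linalg.trace(t1)                             # trace per sample -> (batch,)
    mxdim = tf.cast(tf.shape(y_pred)[-1], tf.float32)    # matrix dimension (4)
    return tf.abs(tr ** 2) / mxdim ** 2                  # per-sample fidelity, shape (batch,)

Keras reduces the returned per-sample values over the batch, so returning a (batch,) tensor is fine for a loss.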



x = layers.Input(shape=(data.realdim, data.realdim), name='input1', batch_size=None)   # (None, 4, 4) input
x1 = layers.GRU(data.Uj_dim, return_sequences=True)(x)
x1 = layers.Dropout(rate=0.2)(x1)
x1 = layers.GRU(data.Uj_dim, return_sequences=True)(x1)
x1 = layers.Dropout(rate=0.2)(x1)
x1 = layers.GRU(data.Uj_dim, return_sequences=True)(x1)
x1 = layers.Flatten()(x1)
y = layers.Dense(160, activation='relu', name='output_data')(x1)   # first output: flattened ten 4x4 matrices
xreshape = layers.Reshape((4, 4))(x)                                # not used further below
y2 = productlayer(trainable=True, name="fidout")(x1)                # second output: custom product layer
model = tf.keras.models.Model(inputs=x, outputs=[y, y2])
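
The productlayer implementation isn't shown here, but for illustration, a batch-preserving layer along these lines (the class name and details below are made up for the sketch, not the actual productlayer) would report its output shape as (None, 4, 4) in the summary rather than (4, 4):

class ProductLayerSketch(layers.Layer):
    # Hypothetical stand-in for productlayer: reshape (batch, 160) into
    # (batch, 10, 4, 4) and multiply the ten matrices together per sample.
    def call(self, inputs):
        mats = tf.reshape(inputs, (-1, 10, 4, 4))     # (batch, 10, 4, 4)
        product = mats[:, 0]                          # (batch, 4, 4)
        for i in range(1, 10):
            product = tf.matmul(product, mats[:, i])  # batched matmul keeps (batch, 4, 4)
        return product

A layer written this way emits one (4, 4) product per sample, which is the shape the fidelity loss would expect for each batch element.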



batchsz = 1
#===============
model.compile(optimizer='adam', loss=['mse',fidelity2], metrics=['mse',fidelity2])
model.summary()

model.fit(data.lab, [inputreshape,data.lab], epochs=2, batch_size=batchsz, validation_split = 0, shuffle=False, steps_per_epoch=1)

#==============

Model: "model_13"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input1 (InputLayer)             [(None, 4, 4)]       0                                            
__________________________________________________________________________________________________
gru_39 (GRU)                    (None, 4, 40)        5400        input1[0][0]                     
__________________________________________________________________________________________________
dropout_26 (Dropout)            (None, 4, 40)        0           gru_39[0][0]                     
__________________________________________________________________________________________________
gru_40 (GRU)                    (None, 4, 40)        9720        dropout_26[0][0]                 
__________________________________________________________________________________________________
dropout_27 (Dropout)            (None, 4, 40)        0           gru_40[0][0]                     
__________________________________________________________________________________________________
gru_41 (GRU)                    (None, 4, 40)        9720        dropout_27[0][0]                 
__________________________________________________________________________________________________
flatten_13 (Flatten)            (None, 160)          0           gru_41[0][0]                     
__________________________________________________________________________________________________
output_data (Dense)             (None, 160)          25760       flatten_13[0][0]                 
__________________________________________________________________________________________________
fidout (productlayer)           (4, 4)               0           flatten_13[0][0]                 
==================================================================================================
Total params: 50,600
Trainable params: 50,600
Non-trainable params: 0
__________________________________________________________________________________________________
Epoch 1/2
---------------------------------------------------------------------------
InvalidArgumentError...InvalidArgumentError: In[0] is not a matrix. Instead it has shape [10,4,4]
     [[{{node loss_7/fidout_loss/ArithmeticOptimizer/FoldTransposeIntoMatMul_matmul}}]]

0 Answers:

No answers yet