Loss function returns an array of Nones (discrete loss function)

Asked: 2019-07-05 14:55:14

Tags: python tensorflow loss-function tensorflow2.0

I am playing around with tf.GradientTape, loosely following the example at https://www.tensorflow.org/beta/tutorials/eager/custom_training_walkthrough, and I need to create a custom loss function in which every prediction gets a loss value weighted according to its outcome.

It is a three-class classification problem. The loss function takes the features "x" (130 of them), the labels "y" (0, 1 or 2) and the "weights" (one weight per label), which depend on whether or not the prediction matches the label. Here is my code:

import numpy as np
import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard


def TF_learning(training_data, training_results, testing_data):

    odds = [i[-2:] for i in training_data]   # the last two columns of each sample are the odds
    training_data = tf.keras.utils.normalize(training_data, axis=1)
    testing_data = tf.keras.utils.normalize(testing_data, axis=1)
    minutes = int((len(training_data[0]) - 10) / 2)
    dense_layers = 1
    neurons = 32
    epochs = 70

    NAME = "{}-nodes-{}-dense".format(neurons, dense_layers)
    tensorboard = TensorBoard(log_dir='logs/{}'.format(NAME))
    #print(NAME)

    model = tf.keras.models.Sequential()
    model.add(tf.keras.layers.Flatten())

    for i_layer in range(0, dense_layers):
        #model.add(tf.keras.layers.batch_normalization(training_data))
        model.add(tf.keras.layers.Dense(neurons, activation=tf.nn.relu))
        model.add(tf.keras.layers.Dropout(0.2))
        model.add(tf.keras.layers.Dense(neurons // 2, activation=tf.nn.relu))  # integer unit count
        model.add(tf.keras.layers.Dropout(0.1))

    model.add(tf.keras.layers.Dense(3, activation=tf.nn.softmax))

    @tf.function
    def loss(model, x, y, weights):

        x = model(x)
        x_range = tf.range(x.shape.as_list()[-1], dtype=x.dtype)

        # scaling by 1e10 turns the softmax into a (non-differentiable) hard argmax
        y_ = tf.reduce_sum(tf.nn.softmax(x*1e10) * x_range, axis=-1)
        y_ = tf.cast(y_, dtype=tf.int32)
        y_ = tf.one_hot(y_, depth=3)

        y = tf.cast(y, tf.int64)
        y = tf.one_hot(y, depth=3)

        correct = tf.multiply(y_, y)   # 1 at the label column when the prediction matches

        # 1 when the prediction is maximally wrong (0 predicted as 2, or 2 as 0)
        wrong = tf.add(tf.multiply(y[:,0], y_[:,2]), tf.multiply(y[:,2], y_[:,0]))

        # write "wrong" into column 1 of every row, then weight each entry
        indices = tf.cast(tf.stack([tf.range(tf.shape(weights)[0], dtype=tf.int32), tf.ones(tf.shape(weights)[0], dtype=tf.int32)], axis=1), dtype=tf.int32)
        scatter = tf.tensor_scatter_nd_update(correct, indices, wrong)
        scatter = tf.cast(scatter, dtype=tf.float64)
        loss_array = tf.multiply(scatter, weights)
        loss = tf.reduce_sum(loss_array)
        loss = tf.reduce_sum(loss_array)

        return loss


    @tf.function
    def grad(model, inputs, targets, weights):

        with tf.GradientTape(persistent=True, watch_accessed_variables=False) as tape:
            loss_value = loss(model, inputs, targets, weights)
            print(tape.gradient(loss_value, model.trainable_variables))
        return loss_value, tape.gradient(loss_value, model.trainable_variables) # Doesn't work, model.variables is empty


    # per-sample weights: 1 - odds for the two outer classes, 1 for column 1
    weights = - tf.Variable(np.insert(odds, 1, values=0, axis=1), dtype=tf.float64) + 1

    l = loss(model, training_data, training_results, weights)
    print("Loss test: {}".format(l))

    optimizer = tf.keras.optimizers.Adam(lr=0.1, decay=1e-5)

    loss_value, grads = grad(model, training_data, training_results, weights)

    print("Step: {}, Initial Loss: {}".format(optimizer.iterations.numpy(),
                                          loss_value.numpy()))

    optimizer.apply_gradients(zip(grads, model.trainable_variables))

    print("Step: {},         Loss: {}".format(optimizer.iterations.numpy(),
                                          loss(model, training_data, training_results).numpy()))

How can I do something like this in TensorFlow? I just need the loss to be weighted according to whether or not the prediction is correct. My guess is that no gradient can be computed, because when the optimizer takes a small step the numbers are still cast to the same integers (a minimal illustration of this follows the traceback below). I get the following error:

Loss test: 7.040000000000001 
WARNING: Logging before flag parsing goes to stderr. 
W0711 18:04:30.068719 9868 backprop.py:935] Calling GradientTape.gradient on a persistent tape inside it's context is significantly less efficient than calling it outside the context (it causes the gradient ops to be recorded on the tape, leading to increased CPU and memory usage). Only call GradientTape.gradient inside the context if you actually want to trace the gradient in order to compute higher order derrivatives. 
[None, None, None, None, None, None] 
Step: 0, Initial Loss: 7.040000000000001 
Traceback (most recent call last):
  File "ML_test.py", line 322, in <module>
    predictions = TF_learning(training_data=X_train,training_results=Y_train,testing_data=X_test)
  File "C:\Code\ATP\Ad_hoc_opgaver\Test\ML_tests\machine_learning_tf2.py", line 157, in TF_learning
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
  File "C:\Code\lib\site-packages\tensorflow\python\keras\optimizer_v2\optimizer_v2.py", line 396, in apply_gradients
    grads_and_vars = _filter_grads(grads_and_vars)
  File "C:\Code\lib\site-packages\tensorflow\python\keras\optimizer_v2\optimizer_v2.py", line 924, in _filter_grads
    ([v.name for _, v in grads_and_vars],))
ValueError: No gradients provided for any variable: ['sequential/dense/kernel:0', 'sequential/dense/bias:0', 'sequential/dense_1/kernel:0', 'sequential/dense_1/bias:0', 'sequential/dense_2/kernel:0', 'sequential/dense_2/bias:0'].
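
To illustrate that suspicion, here is a minimal standalone sketch (toy logits and made-up per-class payoffs, not my real data) showing that the gradient survives a plain softmax but is lost as soon as the prediction is cast to an integer and re-encoded with one_hot:

    import tensorflow as tf

    x = tf.Variable([[2.0, 1.0, 0.1]])      # toy logits
    payoff = tf.constant([0.0, 1.0, 2.0])   # made-up per-class weights

    # Differentiable: weight the soft probabilities directly.
    with tf.GradientTape() as tape:
        soft_loss = tf.reduce_sum(tf.nn.softmax(x) * payoff)
    print(tape.gradient(soft_loss, x))      # a real gradient tensor

    # Non-differentiable: the argmax/cast/one_hot round-trip from my loss().
    with tf.GradientTape() as tape:
        y_ = tf.reduce_sum(tf.nn.softmax(x * 1e10) * tf.range(3, dtype=x.dtype), axis=-1)
        y_ = tf.one_hot(tf.cast(y_, tf.int32), depth=3)
        hard_loss = tf.reduce_sum(y_ * payoff)
    print(tape.gradient(hard_loss, x))      # None: the cast cuts the graph

The first print returns a proper tensor, the second returns None, matching the [None, None, ...] list in my output above.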

Is there any way to make this work? Maybe with an optimizer that doesn't use gradient descent but some kind of random sampling instead? Or one that takes steps large enough to actually get a gradient?
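
One direction I have been considering (just a sketch, untested, and it slightly changes the objective): keep the bookkeeping of my loss() above, but drop the argmax/cast/one_hot round-trip and weight the soft probabilities directly, so the expected payoff stays differentiable:

    @tf.function
    def soft_loss(model, x, y, weights):
        # Same structure as loss() above, but the prediction stays a soft
        # probability vector, so gradients can flow back into the model.
        probs = model(x)                                  # (batch, 3) softmax output
        y = tf.cast(tf.one_hot(tf.cast(y, tf.int32), depth=3), probs.dtype)
        weights = tf.cast(weights, probs.dtype)

        correct = probs * y                               # probability mass on the true class
        wrong = y[:, 0] * probs[:, 2] + y[:, 2] * probs[:, 0]

        # write "wrong" into column 1, exactly as in the hard version
        n = tf.shape(weights)[0]
        indices = tf.stack([tf.range(n), tf.ones(n, dtype=tf.int32)], axis=1)
        scatter = tf.tensor_scatter_nd_update(correct, indices, wrong)
        return tf.reduce_sum(scatter * weights)

Note that my grad() above would still return Nones even with this loss, because watch_accessed_variables=False without a tape.watch() call means the tape records nothing; a default tf.GradientTape() watches the model's trainable variables automatically. Whether the relaxed objective still trains towards the same discrete payoff is a separate question.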

0 Answers