Why is my neural network not learning? (Python TensorFlow CNN)

Asked: 2021-05-31 10:57:26

Tags: python, tensorflow

I am trying to solve a binary classification problem on DNA sequences roughly 2 million bases long.

I decided to one-hot encode the input DNA sequences.
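Roughly like this (a simplified sketch; encode_onehot is just an illustrative stand-in, since in my actual code the encoded arrays are loaded by the loadvalue helper shown further down):

import numpy as np

# Map each base to a one-hot row -> (sequence_length, 4) float array.
BASE_INDEX = {"A": 0, "C": 1, "G": 2, "T": 3}

def encode_onehot(sequence):
    onehot = np.zeros((len(sequence), 4), dtype=np.float32)
    for i, base in enumerate(sequence):
        onehot[i, BASE_INDEX[base]] = 1.0
    return onehot

# Add a leading batch dimension of 1, since I feed one sequence at a time:
X = encode_onehot("ACGTACGT")[np.newaxis, ...]  # shape (1, 8, 4)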

I am using TensorFlow and Keras (Python).

I use the Adam optimizer:

optimizer = keras.optimizers.Adam(learning_rate=learningrate, name="Adam")

and a very simple architecture:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv1D, Dense, GlobalAvgPool1D

ishape = (None, 4)
model = keras.Sequential()
model.add(Conv1D(filternumber, ksize, activation='relu', input_shape=ishape))
model.add(GlobalAvgPool1D(data_format="channels_last"))
model.add(Dense(2, activation='sigmoid'))

This is the training loop:

for epoch in range(epochsize):
    print("Epoch number " + str(epoch) + "_____________")
    batchnumber = 0
    batchavgloss = []
    for batch in batchlist:
        mini_batch_losses = []
        with tf.GradientTape() as tape:
            for seqref in batch:
                seqref = int(seqref)
                X_train, y_train = loadvalue(seqref)  # load one sequence and its label
                logits = model(X_train, training=True)
                loss_value = tf.reduce_mean(
                    tf.nn.weighted_cross_entropy_with_logits(y_train, logits, class_weights))
                mini_batch_losses.append(loss_value)
            loss_avg = tf.reduce_mean(mini_batch_losses)
        print("batch " + str(batchnumber + 1) + " losses: " + str(loss_avg.numpy()))
        batchavgloss.append(loss_avg.numpy())
        batchnumber += 1
        grads = tape.gradient(loss_avg, model.trainable_weights)
        optimizer.apply_gradients(grads_and_vars=zip(grads, model.trainable_weights))

    epochavgloss = sum(batchavgloss) / len(batchavgloss)
    if epochavgloss < bestepochloss:
        bestepochloss = epochavgloss
        model.save(savepath)

The training loop lets me pass one sequence at a time and only update the weights after a batch-size number of sequences; this way I can feed sequences of different lengths.
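Since the convolution's input shape is (None, 4), the same model accepts sequences of any length, e.g. (a minimal sketch with illustrative values for filternumber and ksize, not the ones from my runs):

import numpy as np
from tensorflow import keras
from tensorflow.keras.layers import Conv1D, Dense, GlobalAvgPool1D

filternumber, ksize = 64, 3  # illustrative values
model = keras.Sequential()
model.add(Conv1D(filternumber, ksize, activation='relu', input_shape=(None, 4)))
model.add(GlobalAvgPool1D(data_format="channels_last"))
model.add(Dense(2, activation='sigmoid'))

# Two single-sequence batches of different lengths both produce a (1, 2) output:
short_seq = np.random.rand(1, 100, 4).astype("float32")
long_seq = np.random.rand(1, 5000, 4).astype("float32")
print(model(short_seq).shape, model(long_seq).shape)  # (1, 2) (1, 2)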

The problem is that it only learns something in the first epoch and then stops learning.

I tried all of these configurations without results:

Learning rate 0.1  Batch 2   ksize 3
Learning rate 0.1  Batch 2   ksize 32
Learning rate 0.1  Batch 16  ksize 3
Learning rate 0.1  Batch 16  ksize 32

Learning rate 0.01 Batch 2   ksize 3
Learning rate 0.01 Batch 2   ksize 32
Learning rate 0.01 Batch 16  ksize 3
Learning rate 0.01 Batch 16  ksize 32

Here is an example of the loss values within one epoch: 0.8655851910114288 on the first batch, then 0.854682110786438 repeated identically on all 29 remaining batches.

Can anyone help me?
