PyCharm: changes I make have no effect; how can this behavior be explained?

Posted: 2019-06-15 11:56:56

Tags: tensorflow keras pycharm

This problem only occurs in PyCharm:

I built a very simple NN based on a tutorial from the TF 2.0 website. Strangely, when I change batch_size, training keeps using the old value, as if I had changed nothing. In fact, nothing I change seems to take effect.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Load MNIST and flatten the images to 784-dimensional float vectors in [0, 1].
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255


class Prototype(tf.keras.models.Model):
    def __init__(self, **kwargs):
        super(Prototype, self).__init__(**kwargs)
        self.l1 = layers.Dense(64, activation='relu', name='dense_1')
        self.l2 = layers.Dense(64, activation='relu', name='dense_2')
        self.l3 = layers.Dense(10, activation='softmax', name='predictions')

    def call(self, ip):
        x = self.l1(ip)
        x = self.l2(x)
        return self.l3(x)


model = Prototype()
model.build(input_shape=(None, 784))
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
loss_fn = keras.losses.SparseCategoricalCrossentropy()

batch_size = 250
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size)


def train_one_epoch():
    # Iterate once over the module-level train_dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        print(x_batch_train.shape)  # should reflect the current batch_size
        with tf.GradientTape() as tape:
            logits = model(x_batch_train)  # logits for this minibatch
            loss_value = loss_fn(y_batch_train, logits)
        grads = tape.gradient(loss_value, model.trainable_weights)
        optimizer.apply_gradients(zip(grads, model.trainable_weights))

I run train_one_epoch() and it trains for one epoch. Then, before running train_one_epoch() again, I change the batch size and accordingly rebuild the dataset object so that it yields chunks of the new size, BUT it keeps using the old batch_size.
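Below is a minimal sketch of the steps just described, reusing the names from the snippet above (the new batch_size of 500 and the exact console statements are assumptions; any other value shows the same behavior):

train_one_epoch()  # prints (250, 784) at every step, as expected

# Change the batch size and rebuild the dataset object from scratch.
batch_size = 500  # hypothetical new value
train_dataset = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).batch(batch_size)

train_one_epoch()  # expected (500, 784), but in PyCharm it still prints (250, 784)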

Proof: [screenshot of the console output, still printing batch shapes of the old batch_size]

0 Answers:

No answers