import numpy as np
import tensorflow as tf
from keras.layers import Input, Dense, Dropout
from keras.models import Model
from keras import backend as K

sess = K.get_session()  # use the session Keras itself manages

inp = Input(shape=(10,), name='Input')
X = Dense(10, activation='relu', kernel_initializer='glorot_uniform')(inp)
X = Dropout(0.5, seed=0)(X)
X = Dense(1, activation='relu', kernel_initializer='glorot_uniform')(X)
X = Dropout(0.5, seed=0)(X)
m = Model(inputs=inp, outputs=X)
u = np.random.rand(1,10)
sess.run(tf.global_variables_initializer())
K.set_learning_phase(0)  # inference mode: dropout should be disabled
print(sess.run(X, {inp: u}))
print(sess.run(X, {inp: u}))
K.set_learning_phase(1)  # training mode: dropout should be active
print(sess.run(X, {inp: u}))
print(sess.run(X, {inp: u}))
print(m.predict(u))
This is my code. Every time I run the model I get exactly the same result. But shouldn't the output vary slightly from run to run because of the Dropout layers?
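For context, Keras's Dropout layer only randomizes while the learning phase is "training"; at inference it acts as an identity op (inverted dropout already rescales activations at train time), so repeated predictions are deterministic by design. A minimal NumPy sketch of inverted dropout illustrates the two behaviors; the `dropout` helper below is purely illustrative and is not Keras's actual implementation:

```python
import numpy as np

def dropout(x, rate, training, rng):
    """Inverted dropout: fresh random mask in training, identity at inference."""
    if not training:
        return x  # inference: no randomness, so repeated calls match exactly
    keep = 1.0 - rate
    mask = rng.random(x.shape) < keep  # Bernoulli(keep) mask
    return x * mask / keep             # rescale so the expected value is unchanged

rng = np.random.default_rng(0)
x = np.ones(1000)

a = dropout(x, 0.5, training=False, rng=rng)
b = dropout(x, 0.5, training=False, rng=rng)
print(np.array_equal(a, b))   # True: inference is deterministic

c = dropout(x, 0.5, training=True, rng=rng)
d = dropout(x, 0.5, training=True, rng=rng)
print(np.array_equal(c, d))   # False: each training call draws a new mask
```

Note that a fixed layer seed (like `seed=0` above) makes the mask sequence reproducible across separate sessions, but it does not force successive evaluations within a session to produce identical masks while dropout is active.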