Student-teacher model in Keras

Time: 2017-04-12 11:26:51

Tags: tensorflow deep-learning keras

I converted the student-teacher model from the following URL into a Keras one.

https://github.com/chengshengchan/model_compression/blob/master/teacher-student.py

How can I feed input to both models (student and teacher) but get an output only from the student in Keras? I set all of the teacher's tensors to trainable=False, and set the loss function to the difference between the student's and the teacher's outputs, like this:

tf_loss = tf.nn.l2_loss(teacher - student)/batch_size
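(For reference, `tf.nn.l2_loss(t)` computes `sum(t ** 2) / 2`, so this loss is equivalent to the following NumPy sketch; the teacher/student values here are made up purely for illustration:)

```python
import numpy as np

batch_size = 4
# Hypothetical output logits from the two models, shape (batch_size, 2)
teacher = np.array([[1.0, 2.0], [0.5, 0.0], [3.0, 1.0], [2.0, 2.0]])
student = np.array([[0.5, 2.0], [0.5, 1.0], [2.0, 1.0], [2.0, 0.0]])

# tf.nn.l2_loss(x) == sum(x**2) / 2, so tf_loss above is:
loss = 0.5 * np.sum((teacher - student) ** 2) / batch_size
print(loss)  # 0.78125
```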

As far as I know, model.fit can feed input to only one model. But in this case I need to deal with both the teacher and the student models at the same time.

Thanks in advance!

2 answers:

Answer 0 (score: 4)

Here is a very simple student-teacher model in Keras. I hope it helps people like me. Good luck!

import keras
from keras.datasets import mnist
from keras.layers import Input, Embedding, LSTM, Dense, Lambda
from keras.models import Model
import numpy as np
from keras.utils import np_utils
from keras.layers.core import Dense, Dropout, Activation

nb_classes = 10

(X_train, y_train), (X_test, y_test) = mnist.load_data()

X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)



from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD, Adam, RMSprop

batch_size = 128
nb_classes = 10
nb_epoch = 3

teacher = Sequential()
teacher.add(Dense(10, input_shape=(784,)))
teacher.add(Dense(10))
teacher.add(Activation('softmax'))

teacher.summary()
teacher.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])

history = teacher.fit(X_train, Y_train,
                      batch_size=batch_size, epochs=nb_epoch,
                      verbose=1, validation_data=(X_test, Y_test))
score = teacher.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])

# Freeze the teacher so only the student is updated by the combined model
for layer in teacher.layers:
    layer.trainable = False



# The combined model minimizes (teacher - student), so its target is all zeros
Y_train = np.zeros((60000, 10))



student = Sequential()
student.add(Dense(10, input_dim=784))
student.add(Activation('softmax'))
student.compile(loss='mean_squared_error', optimizer='Adam', metrics=['accuracy'])

from keras.layers import *
def negativeActivation(x):
    return -x

negativeRight = Activation(negativeActivation)(student.output) 
diff = Add()([teacher.output, negativeRight])


model = Model(inputs=[teacher.input, student.input], outputs=[diff])
model.compile(loss='mean_squared_error', optimizer='Adam', metrics=['acc'])

model.summary(line_length=150)
model.fit([X_train, X_train], [Y_train], batch_size=128, epochs=5)


print(student.evaluate(X_test, Y_test))

Answer 1 (score: 1)

The only implementation I have seen in Keras involves building 2 separate functions that widen or deepen the weight layers of the teacher model to serve as the initial weights of the student model. I'm not sure it is exactly what Hinton et al. (2015) describe, to be honest, but it is teacher-student. https://github.com/fchollet/keras/issues/3491
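A minimal sketch of what such a "widening" function can do (the Net2Net idea: duplicate a hidden unit and halve its outgoing weights, so the widened network computes exactly the same function). All names and shapes here are illustrative, not taken from that issue:

```python
import numpy as np

def widen_layer(W1, b1, W2, unit):
    """Duplicate hidden unit `unit` of a dense layer (W1, b1) and split
    its outgoing weights in W2 in half, preserving the network's output.
    Works for any elementwise activation between the two layers."""
    W1w = np.hstack([W1, W1[:, unit:unit + 1]])   # copy the incoming column
    b1w = np.append(b1, b1[unit])                 # copy the bias
    W2w = np.vstack([W2, W2[unit:unit + 1, :]])   # copy the outgoing row
    W2w[unit, :] /= 2.0                           # original unit gets half...
    W2w[-1, :] /= 2.0                             # ...and the clone the other half
    return W1w, b1w, W2w

rng = np.random.RandomState(0)
W1, b1 = rng.randn(784, 10), rng.randn(10)
W2, b2 = rng.randn(10, 10), rng.randn(10)
x = rng.randn(5, 784)

before = (x @ W1 + b1) @ W2 + b2                  # linear activation for brevity
W1w, b1w, W2w = widen_layer(W1, b1, W2, unit=3)
after = (x @ W1w + b1w) @ W2w + b2
print(np.allclose(before, after))                 # True: same function, wider layer
```

The widened matrices could then be loaded into a larger Keras model with `set_weights` as the student's initialization.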