Saving TensorFlow models (an ensemble) to a single pickle file in Python, instead of multiple pickles

Asked: 2019-09-03 10:33:26

Tags: python pandas tensorflow keras pickle

I am looking to deploy an ensemble of 50 models for a regression problem, where each model is a Keras Sequential neural network.

Below is a version of my code (simplified to 3 models) that runs fine.

However, I don't want to create a separate pickle file for every individual model. Is there a way to create a class holding a list of all the models, so that I only need to save/load a single pickle file?

from __future__ import absolute_import, division, print_function
import tensorflow as tf
from tensorflow import keras
import pandas as pd
import numpy as np

train = pd.read_csv("Training Data.csv").fillna(0)
X_train = train.drop(['ID_NUMBER','DATE','X','Y','Z'],axis=1)
Y_train = train[['X','Y','Z']]

EPOCHS = 1500
BATCH_SIZE = 256

#Defining the 3 layered Neural Network
def build_model():
    model = keras.Sequential([
        keras.layers.Dense(1000, activation=tf.nn.softplus,
                           input_shape=(X_train.shape[1],)),
        keras.layers.Dense(500, activation=tf.nn.softplus),
        keras.layers.Dense(3)
    ])

    model.compile(loss='mse', optimizer='adam', metrics=['mse'])
    return model

model0 = build_model()
# Store training stats
history0 = model0.fit(X_train, Y_train, epochs=EPOCHS, batch_size=BATCH_SIZE,
                validation_split=0.0, verbose=1)

model1 = build_model()
# Store training stats
history1 = model1.fit(X_train, Y_train, epochs=EPOCHS, batch_size=BATCH_SIZE,
                validation_split=0.0, verbose=1)

model2 = build_model()
# Store training stats
history2 = model2.fit(X_train, Y_train, epochs=EPOCHS, batch_size=BATCH_SIZE,
                validation_split=0.0, verbose=1)

model0.save("model0.pkl")
model1.save("model1.pkl")
model2.save("model2.pkl")

To make new predictions, my code would look like this:

#Loading Models
model0 = tf.keras.models.load_model("model0.pkl")
model1 = tf.keras.models.load_model("model1.pkl")
model2 = tf.keras.models.load_model("model2.pkl")

#Finding Weights (based on train score)

train_nn_predictions = model0.predict(X_train)
train['X'],train['Y'],train['Z'] = train_nn_predictions[:,0],train_nn_predictions[:,1],train_nn_predictions[:,2]     
nn0 = # training score metric (how it is calculated is not relevant here)
print("Average Train Score for Model 0 is:",nn0)

train_nn_predictions = model1.predict(X_train)
train['X'],train['Y'],train['Z'] = train_nn_predictions[:,0],train_nn_predictions[:,1],train_nn_predictions[:,2]     
nn1 = # training score metric (how it is calculated is not relevant here)
print("Average Train Score for Model 1 is:",nn1)

train_nn_predictions = model2.predict(X_train)
train['X'],train['Y'],train['Z'] = train_nn_predictions[:,0],train_nn_predictions[:,1],train_nn_predictions[:,2]     
nn2 = # training score metric (how it is calculated is not relevant here)
print("Average Train Score for Model 2 is:",nn2)

#Apply the weightings for each of the models
w0,w1,w2 = 1/nn0,1/nn1,1/nn2

#New Predictions
new_record = np.array([my variables])
target_predictions = (w0*model0.predict(new_record)+w1*model1.predict(new_record)+w2*model2.predict(new_record))/(w0+w1+w2)

1 Answer:

Answer 0 (score: 0):

You can try to:

  1. Merge all the models using layers.concatenate. This produces the outputs of all 50 models from one combined model, which can then be saved as a single file (a merged-model sketch is given after this list). The snippet below shows how Concatenate itself works:

import numpy as np
import tensorflow as tf

# Two arrays that agree on every axis except axis 1
x = np.arange(20).reshape(2, 2, 5)
y = np.arange(20, 30).reshape(2, 1, 5)

# Concatenating along axis 1 gives a result of shape (2, 3, 5)
tf.keras.layers.Concatenate(axis=1)([x, y])

  2. Pickle the models using KerasPickleWrapper; a sketch of that is also given below.
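For suggestion 1, here is a minimal sketch (not a definitive implementation) of how the three trained models from the question (model0, model1, model2) could be joined into one Keras Model with a shared input and concatenated outputs, so that only one file has to be saved and loaded; the file name ensemble.h5 is just a placeholder:

import tensorflow as tf
from tensorflow import keras

# One shared input for all sub-models (same feature count as X_train)
inp = keras.layers.Input(shape=(X_train.shape[1],))

# Calling each trained model on the shared input reuses its weights;
# every sub-model returns a (None, 3) tensor for X, Y, Z
outputs = [model0(inp), model1(inp), model2(inp)]

# Concatenate the three outputs into a single (None, 9) tensor
merged = keras.layers.Concatenate(axis=1)(outputs)
ensemble = keras.models.Model(inputs=inp, outputs=merged)

# Save and reload the whole ensemble as one file
ensemble.save("ensemble.h5")
ensemble = keras.models.load_model("ensemble.h5")

# Split the concatenated prediction back into per-model blocks
pred = ensemble.predict(X_train)
pred0, pred1, pred2 = pred[:, 0:3], pred[:, 3:6], pred[:, 6:9]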
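For suggestion 2, a minimal sketch assuming the third-party keras_pickle_wrapper package: its KerasPickleWrapper class wraps a compiled Keras model so that the standard pickle module can serialize it, and calling the wrapper returns the underlying model (treat the exact API as an assumption and check the package documentation):

import pickle
from keras_pickle_wrapper import KerasPickleWrapper  # third-party package (assumption)

# Wrap each trained model from the question and pickle the whole list at once
wrapped = [KerasPickleWrapper(m) for m in (model0, model1, model2)]
with open("ensemble.pkl", "wb") as f:
    pickle.dump(wrapped, f)

# Later: load the single pickle file and unwrap the models again
with open("ensemble.pkl", "rb") as f:
    wrapped = pickle.load(f)
models = [w() for w in wrapped]        # calling a wrapper returns the Keras model
predictions = [m.predict(X_train) for m in models]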