How to save a pickle file to S3

Date: 2019-06-14 11:34:43

Tags: amazon-s3 scikit-learn sklearn-pandas

I am currently working with an xgboost model. After training, I want to save the model (in pickle format) to S3 for future use. The pickle file is saved correctly on my local JupyterHub, but it is not saved to S3.

My code is as follows:

import numpy as np
import pickle as pkl
import boto3
import xgboost as xgb

# 80/20 train/test split on the shuffled dataframe
train, test = np.split(df.sample(frac=1), [int(.8*len(df))])

X_train, y_train = train[features], train[label]
X_test, y_test = test[features], test[label]

xg_reg = xgb.XGBRegressor(objective='reg:linear', colsample_bytree=0.3, learning_rate=0.1,
                          max_depth=5, alpha=10, n_estimators=10)

xg_reg.fit(X_train,y_train)

# save model in local jupyter hub
#pkl.dump(xg_reg, open("xgb_model_DS_regression.pkl", "wb"))

session = boto3.Session(
   aws_access_key_id='XXXXX',
   aws_secret_access_key='VVVV'
)

import io
pickle_buffer = io.BytesIO()
s3_resource = boto3.resource('s3')

new_df.to_pickle(xg_reg)
s3_resource.Object('mn-ml_model', 'data/ML_Models/my_model/').put(Body=pickle_buffer.getvalue())

Any suggestions?
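
For reference, here is a minimal sketch of one way the serialization and upload could be wired together, assuming the bucket 'mn-ml_model' and the credentials from the session above are valid; the file name 'xgb_model_DS_regression.pkl' in the object key is illustrative. The idea is to pickle the fitted model into the BytesIO buffer first, then upload the buffer's bytes to S3.

import io
import pickle as pkl
import boto3

# xg_reg is assumed to be the fitted XGBRegressor from the snippet above
pickle_buffer = io.BytesIO()
pkl.dump(xg_reg, pickle_buffer)  # write the model's pickle bytes into the in-memory buffer

# build the resource from the session so the explicit credentials are actually used
session = boto3.Session(
    aws_access_key_id='XXXXX',
    aws_secret_access_key='VVVV'
)
s3_resource = session.resource('s3')

# upload under a file-like key (illustrative name), not a trailing-slash "folder" key
s3_resource.Object('mn-ml_model', 'data/ML_Models/my_model/xgb_model_DS_regression.pkl').put(
    Body=pickle_buffer.getvalue()
)

Alternatively, pkl.dumps(xg_reg) returns the pickle bytes directly and can be passed to put(Body=...) without the intermediate buffer.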

0 Answers:

No answers