Deploying an ML model on AWS

Time: 2020-07-09 14:36:23

Tags: python amazon-web-services amazon-s3

I have implemented an ML model locally. The model needs to be deployed on S3, and a Lambda then created to invoke it.

The problem is that I am running into a lot of errors. I have tried reading the documentation and following some notebooks, but I cannot figure out how to get my model working.

The code is as follows:

from sagemaker import get_execution_role
import sagemaker
import argparse
import numpy as np
import os
import pandas as pd
from sklearn.externals import joblib
pd.options.mode.chained_assignment = None
import datetime as dt
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
import io
from sagemaker.sklearn.estimator import SKLearn
import s3fs


prefix = 'FP'
sagemaker_session = sagemaker.Session()
role = get_execution_role()

data = pd.read_csv("df.csv", header = 0, usecols = ["col1", "col2"])


os.makedirs('./data_DM', exist_ok=True)
data.to_csv('./data_DM/orders.csv')

WORK_DIRECTORY = 'data_DM'

train_input = sagemaker_session.upload_data(WORK_DIRECTORY, key_prefix="{}/{}".format(prefix, WORK_DIRECTORY) )


script_path = './data_DM/My_script.py'

sklearn = SKLearn(
    entry_point=script_path,
    train_instance_type="ml.m5.2xlarge",
    role=role,
    sagemaker_session=sagemaker_session)

sklearn.fit({'train': train_input})

Here is My_script.py:

import argparse
import numpy as np
import os
import pandas as pd
from sklearn.externals import joblib
pd.options.mode.chained_assignment = None
import datetime as dt
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
import io
from sklearn import tree
import boto3, re, sys, math, json, urllib.request

def cleaning(data):
    # ... lots of cleaning ...
    return data  # the cleaned DataFrame


if __name__ == '__main__':

    bucket_name = 'ciao'
    file_name = 'df.csv'


    data_location = 's3://{}/{}'.format(bucket_name, file_name)

    parser = argparse.ArgumentParser()

    parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])

    args = parser.parse_args()

    data = pd.read_csv(data_location, header = 0, usecols = ["col1", "col2"])

    data_ml = cleaning(data) 

    y = data_ml.loc[:,"event"]
    X = data_ml.loc[:, data_ml.columns != 'event']

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
    
    

    # n_estimators is not a DecisionTreeClassifier parameter; it belongs to
    # ensembles such as RandomForestClassifier
    model = tree.DecisionTreeClassifier(class_weight="balanced", random_state=42)
    model.fit(X_train, y_train)

    #Save the model to the location specified by args.model_dir
    joblib.dump(model, os.path.join(args.model_dir, "model.joblib"))



def model_fn(model_dir):
    model = joblib.load(os.path.join(model_dir, "model.joblib"))
    return model


def input_fn(request_body, request_content_type):
    if request_content_type == 'text/csv':
        samples = []
        for r in request_body.split('|'):
            samples.append(list(map(float,r.split(','))))
        return np.array(samples)
    else:
        raise ValueError("Thie model only supports text/csv input")

def predict_fn(input_data, model):
    return model.predict_proba(cleaning(input_data))

def output_fn(prediction, content_type):
    # INDEX_TO_LABEL is assumed to be a dict mapping class indices to labels
    return ' | '.join([INDEX_TO_LABEL[t] for t in prediction])

The error is as follows:

/miniconda3/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
Traceback (most recent call last):
  File "/miniconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/miniconda3/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/ml/code/Failure_Pred.py", line 206, in <module>
    "weight", "userPrice", "amount", "nParcel"])
  File "/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 685, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/miniconda3/lib/python3.7/site-packages/pandas/io/parsers.py", line 440, in _read
    filepath_or_buffer, encoding, compression
  File "/miniconda3/lib/python3.7/site-packages/pandas/io/common.py", line 206, in get_filepath_or_buffer
    from pandas.io import s3
  File "/miniconda3/lib/python3.7/site-packages/pandas/io/s3.py", line 10, in <module>
    "s3fs", extra="The s3fs package is required to handle s3 files."
  File "/miniconda3/lib/python3.7/site-packages/pandas/compat/_optional.py", line 93, in import_optional_dependency
    raise ImportError(message.format(name=name, extra=extra)) from None
ImportError: Missing optional dependency 's3fs'. The s3fs package is required to handle s3 files. Use pip or conda to install s3fs.
2020-07-09 12:13:27,645 sagemaker-containers ERROR ExecuteUserScriptError:
Command "/miniconda3/bin/python -m Failure_Pred"

2020-07-09 12:13:36 Uploading - Uploading generated training model
2020-07-09 12:13:36 Failed - Training job failed

Error for Training job sagemaker-scikit-learn-2020-07-09-12-10-17-446: Failed. Reason: AlgorithmError: ExecuteUserScriptError:
Command "/miniconda3/bin/python -m Failure_Pred"

It seems that s3fs is not installed, but I have already installed it with both pip install and conda install.

How can I fix this?

Thanks!

1 Answer:

Answer 0 (score: 1)

Edit 07/10: add the train channel key to the local read path: replace opt/ml/input/data/orders.csv with opt/ml/input/data/train/orders.csv



You are getting the error because your data = pd.read_csv(data_location, ...) tries to read from S3. Try replacing it with data = pd.read_csv('opt/ml/input/data/train/orders.csv', ...).
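
For instance, the read in My_script.py could look like the following (a minimal sketch; it assumes the channel is named train, matching the sklearn.fit({'train': train_input}) call above):

import os
import pandas as pd

# Read from the local channel directory instead of S3. The sklearn container
# exposes it via the SM_CHANNEL_TRAIN environment variable;
# /opt/ml/input/data/train is the default location.
train_dir = os.environ.get('SM_CHANNEL_TRAIN', '/opt/ml/input/data/train')
data = pd.read_csv(os.path.join(train_dir, 'orders.csv'),
                   header=0, usecols=["col1", "col2"])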

If you use SageMaker, you don't need to read from S3 in your training script: SageMaker copies the data from S3 to the training EC2 instance for you.

Instead, as indicated in the documentation, you just need to read the data from the local path opt/ml/input/data/<channel name>, where <channel name> is the key used to name your input in the training call model.fit({'<channel name>': 's3://my data'}). Note that local means local to the ephemeral SageMaker Training EC2 instance, not to the SageMaker Notebook EC2 instance you may be using for development and orchestration.
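
To make the mapping concrete, here is a minimal sketch (the bucket name is a placeholder):

# In the notebook / orchestration code:
sklearn.fit({'train': 's3://my-bucket/FP/data_DM'})

# Inside the training container, SageMaker then copies that data to the
# local path /opt/ml/input/data/train/ and sets the environment variable
# SM_CHANNEL_TRAIN to that path.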

The same goes for copying artifacts to S3: you don't need to do it yourself. Just write your artifacts to the local path opt/ml/model and the service will copy them back to S3. Some AWS-provided containers, such as the sklearn container, also expose the input data paths and the artifact path in environment variables (SM_CHANNEL_<channel name> and SM_MODEL_DIR), which you can use to avoid hard-coding them in your code. You can take inspiration from this random forest demo and adapt it to your case. You do not need s3fs.
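
On the artifact side, a minimal sketch (reusing the model object and the joblib import from the training script above):

import os
from sklearn.externals import joblib

# Write the trained model under /opt/ml/model (exposed as SM_MODEL_DIR in the
# sklearn container); SageMaker tars this directory and copies it back to S3.
model_dir = os.environ.get('SM_MODEL_DIR', '/opt/ml/model')
joblib.dump(model, os.path.join(model_dir, 'model.joblib'))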