I am currently trying to deploy a model on Azure and expose its endpoint to my application, but I keep running into errors.

Deployment code
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AciWebservice

# Register the trained model from the run's outputs
model = run.register_model(model_name='pytorch-modeloldage', model_path="outputs/model")
print("Starting.........")

# Inference configuration: entry script plus conda environment file
inference_config = InferenceConfig(runtime="python",
                                   entry_script="pytorchscore.py",
                                   conda_file="myenv.yml")

# ACI deployment configuration
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
                                               auth_enabled=True,
                                               memory_gb=1,
                                               tags={'name': 'oldageml', 'framework': 'pytorch'},
                                               description='oldageml training')

# Deploy the registered model as an ACI web service
service = Model.deploy(workspace=ws,
                       name='pytorch-olageml-run',
                       models=[model],
                       inference_config=inference_config,
                       overwrite=True,
                       deployment_config=aciconfig)
service.wait_for_deployment(show_output=True)

# print(service.get_logs())
print("bruh did you run", service.scoring_uri)
print(service.state)
Error
ERROR - Service deployment polling reached non-successful terminal state, current service state: Transitioning
More information can be found here:
Error:
{
"code": "EnvironmentBuildFailed",
"statusCode": 400,
"message": "Failed Building the Environment."
}
Answer (score: 2)
I had this error too, and I'm sure it was working a few days ago! In any case, I realised I was using Python 3.5 in my environment definition. I changed it to 3.6 and it works! I noticed that a new version of the azureml code was released on 9 December 2019.

Here is the code where I changed the environment; I defined the environment in a variable rather than from a file, so it is slightly different from yours.
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.model import InferenceConfig

# Define the environment from explicit conda/pip dependencies, pinning Python 3.6
myenv = Environment(name="env-keras")
conda_packages = ['numpy']
pip_packages = ['tensorflow==2.0.0', 'keras==2.3.1', 'azureml-sdk', 'azureml-defaults']
mycondaenv = CondaDependencies.create(conda_packages=conda_packages, pip_packages=pip_packages, python_version='3.6.2')
myenv.python.conda_dependencies = mycondaenv
myenv.register(workspace=ws)

inference_config = InferenceConfig(entry_script='score.py', source_directory='.', environment=myenv)
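For the question's original deployment, a minimal sketch of wiring this environment-based InferenceConfig into the existing ACI deployment could look like the following. It assumes the ws, model, and aciconfig objects from the question's code are still in scope, and it reuses the question's entry script and service name; this is an illustration, not the answerer's exact code.

# Sketch: reuse the registered environment with the question's ACI deployment
# (assumes `ws`, `model`, and `aciconfig` are defined as in the question's code)
inference_config = InferenceConfig(entry_script='pytorchscore.py',
                                   source_directory='.',
                                   environment=myenv)

service = Model.deploy(workspace=ws,
                       name='pytorch-olageml-run',
                       models=[model],
                       inference_config=inference_config,
                       deployment_config=aciconfig,
                       overwrite=True)
service.wait_for_deployment(show_output=True)

# If the environment build fails again, the service logs usually show why
print(service.get_logs())
print(service.state, service.scoring_uri)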