I need to get online predictions from a model deployed on Cloud ML Engine. My Python code is similar to the one in the documentation (https://cloud.google.com/ml-engine/docs/tensorflow/online-predict):
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}'.format(project, model)

if version is not None:
    name += '/versions/{}'.format(version)

response = service.projects().predict(
    name=name,
    body={'instances': instances}
).execute()
However, I receive the "instances" data from outside the script, and I would like to know whether there is a way to run the script without creating `service = googleapiclient.discovery.build('ml', 'v1')` before every request, since that takes time. PS: this is my first project on GCP. Thanks.
Answer 0 (score: 0)
Something like this will work. You need to initialize the service globally, then use that service instance for each call.
import os

import googleapiclient.discovery

AI_SERVICE = None

def ai_platform_init():
    global AI_SERVICE
    # Set GCP authentication
    credentials = os.environ.get('GOOGLE_APPLICATION_CREDENTIALS')
    # Path to your credentials
    credentials_path = os.path.join(os.path.dirname(__file__), 'ai-platform-credentials.json')
    if credentials is None and os.path.exists(credentials_path):
        os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = credentials_path
    # Create AI Platform service (MemoryCache is defined further below)
    if os.path.exists(credentials_path):
        AI_SERVICE = googleapiclient.discovery.build('ml', 'v1', cache=MemoryCache())

# Initialize AI Platform on load.
ai_platform_init()
Then, you can do something like this:
def call_ai_platform():
    response = AI_SERVICE.projects().predict(
        name=name,
        body={'instances': instances}
    ).execute()
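The build-once pattern above can be sketched without any GCP dependencies. In this hypothetical sketch, `build_client` is a stand-in for the expensive `googleapiclient.discovery.build('ml', 'v1')` call; the point is only that the constructor runs once per process, no matter how many requests reuse it:

```python
# Hypothetical sketch: construct an expensive client once and reuse it.
# build_client stands in for googleapiclient.discovery.build('ml', 'v1').
_CLIENT = None

def get_client(build_client):
    """Return the shared client, constructing it on first use only."""
    global _CLIENT
    if _CLIENT is None:
        _CLIENT = build_client()
    return _CLIENT
```

Every later call to `get_client` returns the same object, so the discovery document is fetched once per process instead of once per prediction request.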
Bonus! In case you are curious about the MemoryCache class in the googleapiclient.discovery call, it was borrowed from another SO answer:
class MemoryCache():
    """A workaround for cache warnings from Google.

    Check out: https://github.com/googleapis/google-api-python-client/issues/325#issuecomment-274349841
    """
    _CACHE = {}

    def get(self, url):
        return MemoryCache._CACHE.get(url)

    def set(self, url, content):
        MemoryCache._CACHE[url] = content
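As a quick sanity check on how this works: the cache is just a process-wide dict keyed by URL, and because `_CACHE` is a class attribute, every `MemoryCache` instance shares the same store (the class is repeated here so the snippet runs on its own; the URL is only an illustrative key):

```python
class MemoryCache():
    """In-memory cache shared by all instances via a class attribute."""
    _CACHE = {}

    def get(self, url):
        return MemoryCache._CACHE.get(url)

    def set(self, url, content):
        MemoryCache._CACHE[url] = content

# Two separate instances read and write the same underlying store.
MemoryCache().set('discovery-doc-url', 'cached-doc')
print(MemoryCache().get('discovery-doc-url'))  # prints cached-doc
```

That is why passing a fresh `MemoryCache()` to `discovery.build` still reuses previously cached discovery documents within the process.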