How do I load a TensorFlow frozen graph model from a Google Cloud Storage bucket?

Date: 2019-10-17 13:59:36

Tags: tensorflow google-cloud-functions google-cloud-storage

Locally, when we want to load a model with TensorFlow, we do this:

import tensorflow as tf

path_to_frozen = model_path + '/frozen_inference_graph.pb'
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()  # tf.compat.v1.GraphDef() on TF 2.x
    with tf.io.gfile.GFile(path_to_frozen, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
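
For reference, a graph loaded this way is then run through a TF 1.x-style session. A minimal sketch; the tensor names below are the usual TF Object Detection API defaults and are an assumption, not something given in the question:

import numpy as np

# Sketch only: 'image_tensor', 'detection_boxes' and 'detection_scores'
# are the common Object Detection API names -- adjust for your model.
with tf.compat.v1.Session(graph=detection_graph) as sess:
    dummy_image = np.zeros((1, 300, 300, 3), dtype=np.uint8)
    boxes, scores = sess.run(
        ['detection_boxes:0', 'detection_scores:0'],
        feed_dict={'image_tensor:0': dummy_image})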

How can we load a model stored in a Google Cloud Storage bucket from a Google Cloud Function?

2 answers:

Answer 0 (score: 1)

You can store the .pb file in Cloud Storage.

Then, in your function, download it to the local writable directory /tmp. Keep in mind that this directory is in-memory, so the memory allocated to your function must be sized to cover both your application's footprint and the downloaded model file.

Replace the first line with something like this:

# Be sure that your function's service account has access to the storage bucket
from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.get_bucket('<bucket_name>')
blob = bucket.blob('<path/to>/frozen_inference_graph.pb')

# Download your .pb file locally
path_to_frozen = '/tmp/frozen_inference_graph.pb'
blob.download_to_filename(path_to_frozen)
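
Putting the two pieces together, a Cloud Function entry point might look like the sketch below (the function name and bucket/path placeholders are hypothetical). Caching the graph in a global lets warm invocations of the same function instance skip the download:

import tensorflow as tf
from google.cloud import storage

detection_graph = None  # cached across warm invocations

def load_model(request):
    global detection_graph
    if detection_graph is None:
        # Download the frozen graph from the bucket to /tmp
        storage_client = storage.Client()
        bucket = storage_client.get_bucket('<bucket_name>')
        blob = bucket.blob('<path/to>/frozen_inference_graph.pb')
        path_to_frozen = '/tmp/frozen_inference_graph.pb'
        blob.download_to_filename(path_to_frozen)

        # Load it exactly as in the local snippet above
        detection_graph = tf.Graph()
        with detection_graph.as_default():
            od_graph_def = tf.GraphDef()
            with tf.io.gfile.GFile(path_to_frozen, 'rb') as fid:
                od_graph_def.ParseFromString(fid.read())
                tf.import_graph_def(od_graph_def, name='')
    return 'model loaded'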

Answer 1 (score: 1)


import tensorflow as tf

def handler(request):
    # download_blob is a GCS download helper (see the sketch below)
    download_blob("BUCKET_NAME", "redbull/output_inference_graph.pb/frozen_inference_graph.pb", "/tmp/frozen_inference_graph.pb")
    print("ok")
    detection_graph = tf.Graph()
    with detection_graph.as_default():
        od_graph_def = tf.GraphDef()
        with tf.io.gfile.GFile('/tmp/frozen_inference_graph.pb', 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')
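
The download_blob helper called above is not defined in the answer; a minimal sketch using the standard google-cloud-storage client:

from google.cloud import storage

def download_blob(bucket_name, source_blob_name, destination_file_name):
    # Download a blob from the bucket to a local file
    storage_client = storage.Client()
    bucket = storage_client.bucket(bucket_name)
    blob = bucket.blob(source_blob_name)
    blob.download_to_filename(destination_file_name)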