On any file upload with resumable=True and chunking (regardless of chunk size), my maximum speed is about 20 Mbps, but when I don't use that method I can upload the same file at 180 Mbps. The problem is that if a file is too large to fit in memory (or if I run many uploads at once), the machine runs out of memory, so I have to fall back to chunked uploads, which are far too slow.
Code shared between the two examples:
    import os
    from google.oauth2 import service_account
    from googleapiclient.discovery import build
    from googleapiclient.http import MediaFileUpload

    def create_drive_service(user_email, SERVICE_ACCOUNT_JSON_FILE, SCOPES=None):
        if SCOPES is None:
            SCOPES = ['https://www.googleapis.com/auth/drive']
        credentials = service_account.Credentials.from_service_account_file(SERVICE_ACCOUNT_JSON_FILE)
        credentials = credentials.with_scopes(SCOPES)
        credentials = credentials.with_subject(user_email)
        return build('drive', 'v3', credentials=credentials)

    #
    # Fill in the following
    #
    jsonpath = 'c:\\path\\to\\service_account.json'
    service = create_drive_service('user@domain', jsonpath)
    filename = 'filename.ext'
    filepath = 'c:\\path\\to\\folder\\' + filename
    parent_id = ''
The slow code:
    file_metadata = {"name": filename, "parents": [parent_id]}
    media = MediaFileUpload(filepath, chunksize=-1, resumable=True)
    request = service.files().create(body=file_metadata, media_body=media, fields='id')
    response = None
    while response is None:
        status, response = request.next_chunk()
    file = response  # next_chunk() returns the file resource when the upload completes
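For context on the memory behavior: the resumable path stays memory-safe because the file is read in fixed-size chunks, so only one chunk is ever held in memory at a time. A minimal standard-library sketch of that reading pattern (the chunk size here is purely illustrative, not the Drive API's default):

    import os
    import tempfile

    CHUNK_SIZE = 256 * 1024  # illustrative; one such chunk is sent per request

    def iter_chunks(path, chunk_size=CHUNK_SIZE):
        """Yield the file's bytes chunk by chunk, keeping memory bounded."""
        with open(path, 'rb') as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    return
                yield chunk

    # Demo: write 1 MiB to a temp file, then re-read it in bounded chunks.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(os.urandom(1024 * 1024))
        path = tmp.name

    sizes = [len(c) for c in iter_chunks(path)]
    assert sum(sizes) == 1024 * 1024   # every byte was seen exactly once
    assert max(sizes) <= CHUNK_SIZE    # never more than one chunk in memory
    os.unlink(path)

The trade-off I'm hitting is that each chunk becomes its own HTTP round trip, which is presumably where the throughput goes.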
The code that runs at full speed, but fails if the file is too large to fit in memory:
    file_metadata = {"name": filename, "parents": [parent_id]}
    media = MediaFileUpload(filepath, resumable=False)
    file = service.files().create(body=file_metadata, media_body=media, fields='id').execute()
How can I upload large files without worrying about running out of memory, while not being limited to 20 Mbps?