In Python 3 on Colaboratory, I enabled the GPU via Runtime → Change runtime type.
Then I wrote this code:
import pandas as pd
import numpy as np
# Code to read csv file into Colaboratory:
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
#Link of a 10GB file in Google Drive
link = ''
fluff, id = link.split('=')
print (id) # Verify that you have everything after '='
downloaded = drive.CreateFile({'id':id})
downloaded.GetContentFile('empresa.csv')
But I cannot open the file because it runs out of memory: "Your session crashed after using all available RAM."
I have:
Connected to "Python 3 Google Compute Engine backend (GPU)". RAM: 0.64 GB / 12.72 GB. Disk: 25.14 GB / 358.27 GB.
Is there any way to increase Colaboratory's memory, free or paid?
-/-
I tried another approach, mounting Drive as a filesystem:
from google.colab import drive
drive.mount('/content/gdrive')
with open('/content/gdrive/My Drive/foo.txt', 'w') as f:
  f.write('Hello Google Drive!')
!cat /content/gdrive/My\ Drive/foo.txt
# Drive REST API
from google.colab import auth
auth.authenticate_user()
# Construct a Drive API client
from googleapiclient.discovery import build
drive_service = build('drive', 'v3')
# Downloading data from a Drive file into Python
file_id = ''
import io
from googleapiclient.http import MediaIoBaseDownload
request = drive_service.files().get_media(fileId=file_id)
downloaded = io.BytesIO()
downloader = MediaIoBaseDownload(downloaded, request)
done = False
while done is False:
  # _ is a placeholder for a progress object that we ignore.
  # (Our file is small, so we skip reporting progress.)
  _, done = downloader.next_chunk()
downloaded.seek(0)
print('Downloaded file contents are: {}'.format(downloaded.read()))
But the problem persists: "Your session crashed after using all available RAM."
Answer 0 (score: 1)
You can always connect to a local backend.
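A local runtime uses your own machine's RAM instead of the hosted VM's 12 GB. A sketch of the setup, following Colab's documented `jupyter_http_over_ws` flow (port 8888 is just an assumption; any free port works):

```shell
# Install and enable the extension that lets Colab connect
# to a Jupyter server running on your own machine.
pip install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws

# Start a local server that accepts connections from Colab.
jupyter notebook \
  --NotebookApp.allow_origin='https://colab.research.google.com' \
  --port=8888 \
  --NotebookApp.port_retries=0
```

Then, in Colab, choose Connect → "Connect to local runtime" and paste the URL (with token) that the local server prints.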
Answer 1 (score: 1)
My suggestion is to mount your Drive as a filesystem, rather than trying to load the file entirely into memory.
You can then read the CSV from the filesystem incrementally, one chunk at a time.
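For example, with the Drive mounted, pandas can iterate over the file with `chunksize`, so only one chunk of rows is ever resident in memory. A minimal sketch: the tiny generated CSV and the 4-row chunk size are just for illustration; in practice you would point `pd.read_csv` at the mounted file (e.g. '/content/gdrive/My Drive/empresa.csv') with a much larger chunk size.

```python
import csv
import os
import tempfile

import pandas as pd

# Create a small demo CSV (stand-in for the 10 GB file on Drive).
path = os.path.join(tempfile.mkdtemp(), 'demo.csv')
with open(path, 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['id', 'value'])
    for i in range(10):
        writer.writerow([i, i * 2])

# Read the CSV incrementally: each iteration yields a DataFrame
# of at most `chunksize` rows, so memory use stays bounded.
total_rows = 0
for chunk in pd.read_csv(path, chunksize=4):
    total_rows += len(chunk)  # aggregate per chunk instead of loading it all

print(total_rows)  # → 10
```

Each chunk is an ordinary DataFrame, so you can filter or aggregate it and keep only the reduced result, which is what makes this approach fit in Colab's RAM.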