I'm trying to get a list of the files in my Google Drive from a desktop application. The code is as follows:
def main(argv):
  storage = Storage('drive.dat')
  credentials = storage.get()

  if credentials is None or credentials.invalid:
    credentials = run(FLOW, storage)

  # Create an httplib2.Http object to handle our HTTP requests and authorize it
  # with our good Credentials.
  http = httplib2.Http()
  http = credentials.authorize(http)

  service = build("drive", "v2", http=http)

  retrieve_all_files(service)
Then, in retrieve_all_files, I print the files:
param = {}
if page_token:
    param['pageToken'] = page_token
files = service.files().list(**param).execute()
print files
But after I authenticate with my account, the printed file list contains no items. Has anyone run into a similar problem, or does anyone know a solution?
Answer 0 (score: 7)
Correct me if I'm wrong, but I believe you are using the https://www.googleapis.com/auth/drive.file scope, which only returns files that your app has created, or that have been explicitly opened with your app through the Google Drive UI or the Picker API.

To retrieve all files, you need to use the broader scope: https://www.googleapis.com/auth/drive.

To learn more about the different scopes, check the documentation.
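For reference, here is a minimal sketch of how the FLOW object from the question's code might be declared with the broader scope, assuming the oauth2client library that the rest of the code appears to use; the client ID and secret are placeholders for your own values from the Google API Console. After changing the scope, the stored drive.dat credentials still carry the old scope, so delete that file and re-authorize.

from oauth2client.client import OAuth2WebServerFlow

# Placeholder credentials; replace with your project's values.
FLOW = OAuth2WebServerFlow(
    client_id='YOUR_CLIENT_ID',
    client_secret='YOUR_CLIENT_SECRET',
    # Full Drive scope instead of the restricted drive.file scope.
    scope='https://www.googleapis.com/auth/drive',
    redirect_uri='urn:ietf:wg:oauth:2.0:oob')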
Answer 1 (score: 0)
First, you need to loop over page_token to get everything in My Drive, including any subfolders. There are a few other details as well, such as not supplying a query. Try something like the sketch below:
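A minimal sketch of such a retrieve_all_files helper, following the pagination pattern from Google's Drive v2 Python samples; the function name matches the one in the question, and error handling is omitted for brevity.

def retrieve_all_files(service):
    """Fetch every file visible to the authorized account, page by page."""
    result = []
    page_token = None
    while True:
        param = {}
        if page_token:
            param['pageToken'] = page_token
        files = service.files().list(**param).execute()
        # Accumulate this page's items, then follow nextPageToken if present.
        result.extend(files['items'])
        page_token = files.get('nextPageToken')
        if not page_token:
            break
    return result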