Delete Amazon S3 buckets?

Date: 2008-08-26 02:12:07

Tags: amazon-s3 buckets

I've been interacting with Amazon S3 through S3Fox and I can't seem to delete my buckets. I select a bucket, hit delete, confirm the delete in the popup, and... nothing happens. Is there another tool that I should use?

23 Answers:

Answer 0 (score: 143)

It has finally become possible to delete all the files in one go using the new Lifecycle (expiration) rules feature. You can even do it from the AWS console.

Simply right-click on the bucket name in the AWS console, select "Properties", and then in the row of tabs at the bottom of the page select "Lifecycle" and "Add rule". Create a lifecycle rule with the "Prefix" field set blank (blank means all files in the bucket, or you could set it to "a" to delete all files whose names begin with "a"). Set the "Days" field to "1". That's it. Done. Assuming the files are more than one day old, they should all get deleted, and then you can delete the bucket.
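For reference, the console steps above boil down to a lifecycle configuration document that the S3 API accepts. The sketch below (not from the answer; the rule ID is invented, and applying the document would go through an AWS SDK or the console) just builds that document:

```ruby
# Sketch: the lifecycle rule the console steps above create, as a plain
# configuration document. The rule ID "expire-everything" is made up for
# illustration; nothing here talks to S3.
def expire_everything_rule(days: 1, prefix: "")
  {
    "Rules" => [{
      "ID"         => "expire-everything",
      "Prefix"     => prefix,               # blank prefix = all files in the bucket
      "Status"     => "Enabled",
      "Expiration" => { "Days" => days }
    }]
  }
end

config = expire_everything_rule
puts config["Rules"][0]["Expiration"]["Days"]  # → 1
```

Submitting this document to S3 (via an SDK's put-lifecycle call) has the same effect as filling in the "Prefix" and "Days" fields in the console.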

I only just tried this for the first time, so I'm still waiting to see how quickly the files get deleted (it wasn't instant, but presumably it should happen within 24 hours) and whether I get billed for one delete command or 50 million delete commands... fingers crossed!

Answer 1 (score: 30)

Remember that S3 buckets need to be empty before they can be deleted. The good news is that most 3rd party tools automate this process. If you are running into problems with S3Fox, I recommend trying S3FM for a GUI or S3Sync for the command line. Amazon has a great article describing how to use S3Sync. After setting up your variables, the key command is

./s3cmd.rb deleteall <your bucket name>

Deleting buckets with lots of individual files tends to crash a lot of S3 tools, because they try to display a list of all the files in the directory. You need to find a way to delete in batches. The best GUI tool I've found for this purpose is Bucket Explorer. It deletes files in an S3 bucket in 1000-file chunks and does not crash when attempting to open large buckets, unlike s3Fox and S3FM.

I've also found a few scripts that you can use for this purpose. I haven't tried these scripts yet, but they look pretty straightforward.

RUBY

require 'aws/s3'

AWS::S3::Base.establish_connection!(
  :access_key_id     => 'your access key',
  :secret_access_key => 'your secret key'
)

bucket = AWS::S3::Bucket.find('the bucket name')

while(!bucket.empty?)
  begin
    puts "Deleting objects in bucket"

    bucket.objects.each do |object|
      object.delete
      puts "There are #{bucket.objects.size} objects left in the bucket"
    end

    puts "Done deleting objects"

  rescue SocketError
    puts "Had socket error"
  end
end

PERL

#!/usr/bin/perl
use Net::Amazon::S3;
my $aws_access_key_id = 'your access key';
my $aws_secret_access_key = 'your secret access key';
my $increment = 50; # 50 at a time
my $bucket_name = 'bucket_name';

my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, retry => 1, });
my $bucket = $s3->bucket($bucket_name);

print "Incrementally deleting the contents of $bucket_name\n";

my $deleted = 1;
my $total_deleted = 0;
while ($deleted > 0) {
    print "Loading up to $increment keys...\n";
    my $response = $bucket->list({'max-keys' => $increment, })
        or die $s3->err . ": " . $s3->errstr . "\n";
    $deleted = scalar(@{ $response->{keys} });
    $total_deleted += $deleted;
    print "Deleting $deleted keys ($total_deleted total)...\n";
    foreach my $key ( @{ $response->{keys} } ) {
        my $key_name = $key->{key};
        $bucket->delete_key($key->{key})
            or die $s3->err . ": " . $s3->errstr . "\n";
    }
}
print "Deleting bucket...\n";
$bucket->delete_bucket or die $s3->err . ": " . $s3->errstr;
print "Done.\n";

Source: Tarkblog

Hope this helps!

Answer 2 (score: 16)

Recent versions of s3cmd have --recursive

e.g.,

~/$ s3cmd rb --recursive s3://bucketwithfiles

http://s3tools.org/kb/item5.htm

Answer 3 (score: 7)

With s3cmd: create a new empty directory, then run

s3cmd sync --delete-removed empty_directory s3://yourbucket

Answer 4 (score: 5)

This might be a bug in S3Fox, because it is generally able to delete items recursively. However, I'm not sure if I've ever tried to delete a whole bucket and its contents at once.

The JetS3t project, as mentioned by Stu, includes a Java GUI applet you can easily run in a browser to manage your S3 buckets: Cockpit. It has both strengths and weaknesses compared to S3Fox, but there's a good chance it will help you deal with your troublesome bucket. Though it will require you to delete the objects first, then the bucket.

Disclaimer: I'm the author of JetS3t and Cockpit

Answer 5 (score: 5)

SpaceBlock also makes it simple to delete s3 buckets - right-click the bucket, delete, wait for the job to complete in the transfers view, done.

This is the free and open source windows s3 front-end that I maintain, so shameless plug alert, etc.

Answer 6 (score: 4)

If you have ruby (and rubygems) installed, install the aws-s3 gem with

gem install aws-s3

or

sudo gem install aws-s3

Create the file delete_bucket.rb:

require "rubygems" # optional
require "aws/s3"
AWS::S3::Base.establish_connection!(
  :access_key_id     => 'access_key_id',
  :secret_access_key => 'secret_access_key')
AWS::S3::Bucket.delete("bucket_name", :force => true)

and run it:

ruby delete_bucket.rb

Since Bucket#delete was returning a timeout exception for me, I expanded the script:

require "rubygems" # optional
require "aws/s3"
AWS::S3::Base.establish_connection!(
  :access_key_id     => 'access_key_id',
  :secret_access_key => 'secret_access_key')
while AWS::S3::Bucket.find("bucket_name")
  begin
    AWS::S3::Bucket.delete("bucket_name", :force => true)
  rescue
  end
end

Answer 7 (score: 4)

I have implemented bucket-destroy, a multithreaded utility that does everything required to delete a bucket. It handles non-empty buckets, as well as version-enabled bucket keys.

You can read the blog post here http://bytecoded.blogspot.com/2011/01/recursive-delete-utility-for-version.html and the instructions here http://code.google.com/p/bucket-destroy/

I've successfully deleted a bucket that contained double '//' in the key names, versioned keys, and DeleteMarker keys. Currently I'm running it on a bucket containing ~40,000,000 objects; so far I've been able to delete 1,200,000 in several hours on an m1.large. Note that the utility is multithreaded but does not (yet) distribute work across machines (which would enable horizontal scaling by launching the utility on several computers).

Answer 8 (score: 4)

If you use amazon's console and need to clear out a bucket on a one-off basis: you can browse into your bucket, select the top key, scroll to the bottom, hold Shift on your keyboard, and click the bottom one. It will select everything in between, and then you can right-click and delete.

Answer 9 (score: 3)

I guess the easiest way would be to use {3}}, a free online file manager for Amazon S3. No applications to install, no 3rd party web site registrations. Runs directly from Amazon S3, secure and convenient.

Just select your bucket and hit delete.

Answer 10 (score: 3)

One technique that can be used to avoid this problem is putting all the objects in a "folder" in the bucket, allowing you to just delete the folder and then delete the bucket. Additionally, the s3cmd tool available from http://s3tools.org can be used to delete a bucket with files in it:

s3cmd rb --force s3://bucket-name

Answer 11 (score: 1)

I wrote a script in Python that successfully removed my 9000 objects. See this page:

https://efod.se/blog/archive/2009/08/09/delete-s3-bucket

Answer 12 (score: 1)

Amazon recently added a new feature, "Multi-Object Delete", which allows up to 1,000 objects to be deleted at a time with a single API request. This should simplify the process of deleting huge numbers of files from a bucket.

The documentation for the new feature is available here: http://docs.amazonwebservices.com/AmazonS3/latest/dev/DeletingMultipleObjects.html
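A client using Multi-Object Delete has to split its key list into batches of at most 1,000 before issuing requests. As a rough sketch (the batching helper and the invented key names below are for illustration only; the actual delete call would go through an AWS SDK and is not shown):

```ruby
# Sketch: group keys into batches of at most 1,000, the per-request limit
# of S3's Multi-Object Delete. The key names are invented; nothing here
# talks to S3.
def in_batches(keys, size = 1000)
  keys.each_slice(size).to_a
end

keys = (0...2500).map { |n| "key-#{n}" }   # pretend listing of the bucket
batches = in_batches(keys)
puts batches.map(&:length).inspect         # → [1000, 1000, 500]

# Each batch would then be sent as a single Multi-Object Delete request.
```

So a bucket with 50 million objects still needs about 50,000 requests, but that is a large improvement over one HTTP request per object.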

Answer 13 (score: 1)

One more shameless plug: I got tired of waiting for individual HTTP delete requests when I had to delete 250,000 items, so I wrote a Ruby script that does it multithreaded and completes in a fraction of the time:

http://github.com/sfeley/s3nuke/

This works much faster in Ruby 1.9 because of the way threads are handled.

Answer 14 (score: 1)

I'm one of the developers on the Bucket Explorer team. We provide different options to delete a bucket, per the user's choice... 1) Quick Delete - this option will delete your data from the bucket in chunks of 1000. 2) Permanent Delete - this option will delete the objects in a queue.

How to delete Amazon S3 files and bucket?

Answer 15 (score: 1)

This is a hard problem. My solution is at http://stuff.mit.edu/~jik/software/delete-s3-bucket.pl.txt. It describes all the things I've determined can go wrong in a comment at the top. Here's the current version of the script (if I change it, I will put a new version at the URL, but probably not here).

#!/usr/bin/perl

# Copyright (c) 2010 Jonathan Kamens.
# Released under the GNU General Public License, Version 3.
# See <http://www.gnu.org/licenses/>.

# $Id: delete-s3-bucket.pl,v 1.3 2010/10/17 03:21:33 jik Exp $

# Deleting an Amazon S3 bucket is hard.
#
# * You can't delete the bucket unless it is empty.
#
# * There is no API for telling Amazon to empty the bucket, so you have to
# delete all of the objects one by one yourself.
#
# * If you've recently added a lot of large objects to the bucket, then they
# may not all be visible yet on all S3 servers. This means that even after the
# server you're talking to thinks all the objects are all deleted and lets you
# delete the bucket, additional objects can continue to propagate around the S3
# server network. If you then recreate the bucket with the same name, those
# additional objects will magically appear in it!
# 
# It is not clear to me whether the bucket delete will eventually propagate to
# all of the S3 servers and cause all the objects in the bucket to go away, but
# I suspect it won't. I also suspect that you may end up continuing to be
# charged for these phantom objects even though the bucket they're in is no
# longer even visible in your S3 account.
#
# * If there's a CR, LF, or CRLF in an object name, then it's sent just that
# way in the XML that gets sent from the S3 server to the client when the
# client asks for a list of objects in the bucket. Unfortunately, the XML
# parser on the client will probably convert it to the local line ending
# character, and if it's different from the character that's actually in the
# object name, you then won't be able to delete it. Ugh! This is a bug in the
# S3 protocol; it should be enclosing the object names in CDATA tags or
# something to protect them from being munged by the XML parser.
#
# Note that this bug even affects the AWS Web Console provided by Amazon!
#
# * If you've got a whole lot of objects and you serialize the delete process,
# it'll take a long, long time to delete them all.

use threads;
use strict;
use warnings;

# Keys can have newlines in them, which screws up the communication
# between the parent and child processes, so use URL encoding to deal
# with that. 
use CGI qw(escape unescape); # Easiest place to get this functionality.
use File::Basename;
use Getopt::Long;
use Net::Amazon::S3;

my $whoami = basename $0;
my $usage = "Usage: $whoami [--help] --access-key-id=id --secret-access-key=key
 --bucket=name [--processes=#] [--wait=#] [--nodelete]

    Specify --processes to indicate how many deletes to perform in
    parallel. You're limited by RAM (to hold the parallel threads) and
    bandwidth for the S3 delete requests.

    Specify --wait to indicate seconds to require the bucket to be verified
    empty. This is necessary if you create a huge number of objects and then
    try to delete the bucket before they've all propagated to all the S3
    servers (I've seen a huge backlog of newly created objects take *hours* to
    propagate everywhere). See the comment at the top of the script for more
    information about this issue.

    Specify --nodelete to empty the bucket without actually deleting it.\n";

my($aws_access_key_id, $aws_secret_access_key, $bucket_name, $wait);
my $procs = 1;
my $delete = 1;

die if (! GetOptions(
       "help" => sub { print $usage; exit; },
       "access-key-id=s" => \$aws_access_key_id,
       "secret-access-key=s" => \$aws_secret_access_key,
       "bucket=s" => \$bucket_name,
       "processes=i" => \$procs,
       "wait=i" => \$wait,
       "delete!" => \$delete,
 ));
die if (! ($aws_access_key_id && $aws_secret_access_key && $bucket_name));

my $increment = 0;

print "Incrementally deleting the contents of $bucket_name\n";

$| = 1;

my(@procs, $current);
for (1..$procs) {
    my($read_from_parent, $write_to_child);
    my($read_from_child, $write_to_parent);
    pipe($read_from_parent, $write_to_child) or die;
    pipe($read_from_child, $write_to_parent) or die;
    threads->create(sub {
 close($read_from_child);
 close($write_to_child);
 my $old_select = select $write_to_parent;
 $| = 1;
 select $old_select;
 &child($read_from_parent, $write_to_parent);
      }) or die;
    close($read_from_parent);
    close($write_to_parent);
    my $old_select = select $write_to_child;
    $| = 1;
    select $old_select;
    push(@procs, [$read_from_child, $write_to_child]);
}

my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id,
          aws_secret_access_key => $aws_secret_access_key,
          retry => 1,
         });
my $bucket = $s3->bucket($bucket_name);

my $deleted = 1;
my $total_deleted = 0;
my $last_start = time;
my($start, $waited);
while ($deleted > 0) {
    $start = time;
    print "\nLoading ", ($increment ? "up to $increment" :
    "as many as possible")," keys...\n";
    my $response = $bucket->list({$increment ? ('max-keys' => $increment) : ()})
 or die $s3->err . ": " . $s3->errstr . "\n";
    $deleted = scalar(@{ $response->{keys} }) ;
    if (! $deleted) {
 if ($wait and ! $waited) {
     my $delta = $wait - ($start - $last_start);
     if ($delta > 0) {
  print "Waiting $delta second(s) to confirm bucket is empty\n";
  sleep($delta);
  $waited = 1;
  $deleted = 1;
  next;
     }
     else {
  last;
     }
 }
 else {
     last;
 }
    }
    else {
 $waited = undef;
    }
    $total_deleted += $deleted;
    print "\nDeleting $deleted keys($total_deleted total)...\n";
    $current = 0;
    foreach my $key ( @{ $response->{keys} } ) {
 my $key_name = $key->{key};
 while (! &send(escape($key_name) . "\n")) {
     print "Thread $current died\n";
     die "No threads left\n" if (@procs == 1);
     if ($current == @procs-1) {
  pop @procs;
  $current = 0;
     }
     else {
  $procs[$current] = pop @procs;
     }
 }
 $current = ($current + 1) % @procs;
 threads->yield();
    }
    print "Sending sync message\n";
    for ($current = 0; $current < @procs; $current++) {
 if (! &send("\n")) {
     print "Thread $current died sending sync\n";
     if ($current == @procs-1) {
  pop @procs;
  last;
     }
     $procs[$current] = pop @procs;
     $current--;
 }
 threads->yield();
    }
    print "Reading sync response\n";
    for ($current = 0; $current < @procs; $current++) {
 if (! &receive()) {
     print "Thread $current died reading sync\n";
     if ($current == @procs-1) {
  pop @procs;
  last;
     }
     $procs[$current] = pop @procs;
     $current--;
 }
 threads->yield();
    }    
}
continue {
    $last_start = $start;
}

if ($delete) {
    print "Deleting bucket...\n";
    $bucket->delete_bucket or die $s3->err . ": " . $s3->errstr;
    print "Done.\n";
}

sub send {
    my($str) = @_;
    my $fh = $procs[$current]->[1];
    print($fh $str);
}

sub receive {
    my $fh = $procs[$current]->[0];
    scalar <$fh>;
}

sub child {
    my($read, $write) = @_;
    threads->detach();
    my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id,
       aws_secret_access_key => $aws_secret_access_key,
       retry => 1,
      });
    my $bucket = $s3->bucket($bucket_name);
    while (my $key = <$read>) {
 if ($key eq "\n") {
     print($write "\n") or die;
     next;
 }
 chomp $key;
 $key = unescape($key);
 if ($key =~ /[\r\n]/) {
     my(@parts) = split(/\r\n|\r|\n/, $key, -1);
     my(@guesses) = shift @parts;
     foreach my $part (@parts) {
  @guesses = (map(($_ . "\r\n" . $part,
     $_ . "\r"   . $part,
     $_ . "\n"   . $part), @guesses));
     }
     foreach my $guess (@guesses) {
  if ($bucket->get_key($guess)) {
      $key = $guess;
      last;
  }
     }
 }
 $bucket->delete_key($key) or
     die $s3->err . ": " . $s3->errstr . "\n";
 print ".";
 threads->yield();
    }
    return;
}

Answer 16 (score: 0)

This is what I use. Just simple ruby code.

case bucket.size
  when 0
    puts "Nothing left to delete"
  when 1..1000
    bucket.objects.each do |item|
      item.delete
      puts "Deleting - #{bucket.size} left"        
    end
end

Answer 17 (score: 0)

First delete all the objects in the bucket. Then you can delete the bucket itself.

Apparently, a bucket that contains objects cannot be deleted, and S3Fox does not do this for you.

I've had other small issues with S3Fox myself, and now use a Java-based tool, jets3t, which is more forthcoming about error conditions. There must be others, too.

Answer 18 (score: 0)

Try https://s3explorer.appspot.com/ to manage your S3 account.

Answer 19 (score: 0)

I always ended up using their C# API and little scripts to do this. I'm not sure why S3Fox can't do it, but that functionality appears to be broken within it. I'm sure many of the other S3 tools can do it as well.

Answer 20 (score: 0)

I will have to have a look at some of these alternative file managers. I have used (and like) BucketExplorer, which you can get from - surprisingly - http://www.bucketexplorer.com/

It is a 30-day free trial, then (currently) costing US$49.99 per licence (US$49.95 on the purchase cover page).

Answer 21 (score: 0)

You must make sure you have the correct write permission set for the bucket, and that the bucket contains no objects. Some helpful tools that can assist your deletion: CrossFTP, which lets you view and delete buckets like an FTP client, and the jets3t tool mentioned above.

Answer 22 (score: -2)

Use the Amazon web management console. With Google Chrome for speed: it deleted objects a lot faster than Firefox (about 10 times faster). I had 60,000 objects to delete.
