I'm using the latest version of the official Amazon S3 SDK (1.0.14.1) to build a backup tool. So far everything works fine as long as the files I upload are under 5 MB, but when any file is larger than 5 MB the upload fails with the following exception:
System.Net.WebException: The request was aborted: The request was canceled. ---> System.IO.IOException: Cannot close stream until all bytes are written.
   at System.Net.ConnectStream.CloseInternal(Boolean internalCall, Boolean aborting)
   --- End of inner exception stack trace ---
   at Amazon.S3.AmazonS3Client.ProcessRequestError(String actionName, HttpWebRequest request, WebException we, HttpWebResponse errorResponse, String requestAddr, WebHeaderCollection& respHdrs, Type t)
   at Amazon.S3.AmazonS3Client.Invoke[T](S3Request userRequest)
   at Amazon.S3.AmazonS3Client.PutObject(PutObjectRequest request)
   at BackupToolkit.S3Module.UploadFile(String sourceFileName, String destinationFileName) in W:\code\AutoBackupTool\BackupToolkit\S3Module.cs:line 88
   at BackupToolkit.S3Module.UploadFiles(String sourceDirectory) in W:\code\AutoBackupTool\BackupToolkit\S3Module.cs:line 108
Note: 5 MB is roughly the boundary where the failures start; it could be slightly lower or higher.
My assumption is that the connection is timing out and the stream is being closed automatically before the file finishes uploading.
I tried to find a way to set a long timeout (but I couldn't find the option on either AmazonS3 or AmazonS3Config).
Any ideas on how to increase the timeout (such as an application-wide setting I could use)? Or is this unrelated to timeouts?
Code:
var s3Client = AWSClientFactory.CreateAmazonS3Client(AwsAccessKey, AwsSecretKey);
var putObjectRequest = new PutObjectRequest {
    BucketName = Bucket,
    FilePath = sourceFileName,
    Key = destinationFileName,
    MD5Digest = md5Base64,
    GenerateMD5Digest = true
};
using (var upload = s3Client.PutObject(putObjectRequest)) { }
Answer 0 (score: 42):
Updated answer:
I recently updated a project that uses the Amazon AWS .NET SDK (to version 1.4.1.0), and this version contains two improvements that did not exist when I wrote the original answer:
1. You can now set Timeout to -1 to give the put operation an indefinite time limit.
2. PutObjectRequest has an extra property called ReadWriteTimeout, which can be set (in milliseconds) to time out at the stream read/write level rather than at the level of the whole put operation.

So my code now looks like this:
var putObjectRequest = new PutObjectRequest {
    BucketName = Bucket,
    FilePath = sourceFileName,
    Key = destinationFileName,
    MD5Digest = md5Base64,
    GenerateMD5Digest = true,
    Timeout = -1,
    ReadWriteTimeout = 300000     // 5 minutes in milliseconds
};
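A side note for readers on much newer SDK releases: as far as I know, these per-request timeout properties were later moved onto the client configuration. The sketch below is an assumption about a V3-era .NET Framework build, where AmazonS3Config is assumed to expose Timeout and ReadWriteTimeout as TimeSpan values; verify the property names against the SDK version you actually use.

// Assumed V3-era API: Timeout / ReadWriteTimeout as TimeSpan properties on AmazonS3Config.
var s3Config = new AmazonS3Config
{
    RegionEndpoint = Amazon.RegionEndpoint.USEast1, // placeholder region
    Timeout = TimeSpan.FromHours(1),                // overall request timeout
    ReadWriteTimeout = TimeSpan.FromMinutes(5)      // stream read/write timeout
};
using (var client = new AmazonS3Client(AwsAccessKey, AwsSecretKey, s3Config))
{
    // PutObject / TransferUtility calls go here, as in the examples in this thread.
}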
Original answer:
I managed to figure out the answer...
Before posting the question I had explored AmazonS3 and AmazonS3Config, but not PutObjectRequest.
On PutObjectRequest there is a Timeout property (in milliseconds). I have successfully used this to upload larger files (note: setting it to 0 does not remove the timeout; you need to specify a positive number of milliseconds... I went with 1 hour).
This works fine:
var putObjectRequest = new PutObjectRequest {
    BucketName = Bucket,
    FilePath = sourceFileName,
    Key = destinationFileName,
    MD5Digest = md5Base64,
    GenerateMD5Digest = true,
    Timeout = 3600000
};
Answer 1 (score: 10):
I was having a similar problem and started using the TransferUtility class to perform multipart uploads.
This code works. I did run into problems when the timeout was set too low, though!
var request = new TransferUtilityUploadRequest()
    .WithBucketName(BucketName)
    .WithFilePath(sourceFile.FullName)
    .WithKey(key)
    .WithTimeout(100 * 60 * 60 * 1000)
    .WithPartSize(10 * 1024 * 1024)
    .WithSubscriber((src, e) =>
    {
        Console.CursorLeft = 0;
        Console.Write("{0}: {1} of {2} ", sourceFile.Name, e.TransferredBytes, e.TotalBytes);
    });

utility.Upload(request);
As I type this I have a 4 GB upload in progress, and it has already gotten further than it ever has before!
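For completeness, the snippet above never shows how the utility instance is created. A minimal sketch, assuming the era-appropriate TransferUtility constructors (the credential variable names are placeholders, not something from this answer):

// Assumption: TransferUtility built directly from credentials (this constructor is also
// used in Answer 3 below), or by wrapping an existing S3 client.
var utility = new TransferUtility(AwsAccessKey, AwsSecretKey);
// Alternatively:
// var utility = new TransferUtility(s3Client);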
Answer 2 (score: 7):
The AWS SDK for .NET has two major APIs for working with Amazon S3. Both can upload large and small files to S3.
1. Low-level API:
The low-level API follows the same pattern as the low-level APIs for the other services in the SDK. There is a client object called AmazonS3Client that implements the IAmazonS3 interface. It contains methods for each of the service operations exposed by S3.
Namespaces: Amazon.S3, Amazon.S3.Model
// Step 1 : create the client configuration
AmazonS3Config s3Config = new AmazonS3Config();
s3Config.RegionEndpoint = GetRegionEndPoint();

// Step 2 : create the client
using (var client = new AmazonS3Client(My_AWSAccessKey, My_AWSSecretKey, s3Config))
{
    // Step 3 : build the request
    PutObjectRequest request = new PutObjectRequest();
    request.Key = My_key;
    request.InputStream = My_fileStream;
    request.BucketName = My_BucketName;

    // Step 4 : finally, put the object to S3
    client.PutObject(request);
}
<强> 2。 TransferUtility:(我建议使用此API)
TransferUtility runs on top of the low-level API. For putting and getting objects into S3, it is a simple interface that handles the most common uses of S3. The biggest benefit comes with putting objects. For example, TransferUtility detects whether a file is large and switches into multipart upload mode.
Namespace: Amazon.S3.Transfer
// Step 1 : Create the "Transfer Utility" (replacement of the old "Transfer Manager")
TransferUtility fileTransferUtility =
    new TransferUtility(new AmazonS3Client(Amazon.RegionEndpoint.USEast1));

// Step 2 : Create the request object
TransferUtilityUploadRequest uploadRequest =
    new TransferUtilityUploadRequest
    {
        BucketName = My_BucketName,
        FilePath = My_filePath,
        Key = My_keyName
    };

// Step 3 : Event handler that is called automatically as the upload progresses
uploadRequest.UploadProgressEvent +=
    new EventHandler<UploadProgressArgs>(uploadRequest_UploadPartProgressEvent);

static void uploadRequest_UploadPartProgressEvent(object sender, UploadProgressArgs e)
{
    Console.WriteLine("{0}/{1}", e.TransferredBytes, e.TotalBytes);
}

// Step 4 : Hit upload and send the data to S3
fileTransferUtility.Upload(uploadRequest);
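If you want to influence when TransferUtility switches into multipart mode, or how many parts it uploads in parallel, the utility can also be constructed with a TransferUtilityConfig. A hedged sketch: the property names below (MinSizeBeforePartUpload, ConcurrentServiceRequests) are my assumption about the config object, not something shown in this answer, so verify them against the SDK version you use.

// Assumed TransferUtilityConfig knobs for multipart behaviour; verify names for your SDK version.
var transferConfig = new TransferUtilityConfig
{
    MinSizeBeforePartUpload = 20 * 1024 * 1024, // switch to multipart above ~20 MB
    ConcurrentServiceRequests = 5               // number of parts uploaded in parallel
};
var tunedUtility = new TransferUtility(
    new AmazonS3Client(Amazon.RegionEndpoint.USEast1), transferConfig);
tunedUtility.Upload(uploadRequest);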
Answer 3 (score: 6):
Nick Randell has the right idea on this. Here is a further example with some alternative event handling, plus a way to get the percent complete for the file being uploaded:
private static string WritingLargeFile(AmazonS3 client, int mediaId, string bucketName, string amazonKey, string fileName, string fileDesc, string fullPath)
{
    try
    {
        Log.Add(LogTypes.Debug, mediaId, "WritingLargeFile: Create TransferUtilityUploadRequest");
        var request = new TransferUtilityUploadRequest()
            .WithBucketName(bucketName)
            .WithKey(amazonKey)
            .WithMetadata("fileName", fileName)
            .WithMetadata("fileDesc", fileDesc)
            .WithCannedACL(S3CannedACL.PublicRead)
            .WithFilePath(fullPath)
            .WithTimeout(100 * 60 * 60 * 1000) // 100 hours in milliseconds
            .WithPartSize(5 * 1024 * 1024);    // Upload in 5 MB pieces

        request.UploadProgressEvent += new EventHandler<UploadProgressArgs>(uploadRequest_UploadPartProgressEvent);

        Log.Add(LogTypes.Debug, mediaId, "WritingLargeFile: Create TransferUtility");
        TransferUtility fileTransferUtility = new TransferUtility(ConfigurationManager.AppSettings["AWSAccessKey"], ConfigurationManager.AppSettings["AWSSecretKey"]);

        Log.Add(LogTypes.Debug, mediaId, "WritingLargeFile: Start Upload");
        fileTransferUtility.Upload(request);

        return amazonKey;
    }
    catch (AmazonS3Exception amazonS3Exception)
    {
        if (amazonS3Exception.ErrorCode != null &&
            (amazonS3Exception.ErrorCode.Equals("InvalidAccessKeyId") ||
             amazonS3Exception.ErrorCode.Equals("InvalidSecurity")))
        {
            Log.Add(LogTypes.Debug, mediaId, "Please check the provided AWS Credentials.");
        }
        else
        {
            Log.Add(LogTypes.Debug, mediaId, String.Format("An error occurred with the message '{0}' when writing an object", amazonS3Exception.Message));
        }
        return String.Empty; // Failed
    }
}

private static Dictionary<string, int> uploadTracker = new Dictionary<string, int>();

static void uploadRequest_UploadPartProgressEvent(object sender, UploadProgressArgs e)
{
    TransferUtilityUploadRequest req = sender as TransferUtilityUploadRequest;
    if (req != null)
    {
        string fileName = req.FilePath.Split('\\').Last();
        if (!uploadTracker.ContainsKey(fileName))
            uploadTracker.Add(fileName, e.PercentDone);

        // When the percentage done changes, add a log entry:
        if (uploadTracker[fileName] != e.PercentDone)
        {
            uploadTracker[fileName] = e.PercentDone;
            Log.Add(LogTypes.Debug, 0, String.Format("WritingLargeFile progress: {1} of {2} ({3}%) for file '{0}'", fileName, e.TransferredBytes, e.TotalBytes, e.PercentDone));
        }
    }
}

public static int GetAmazonUploadPercentDone(string fileName)
{
    if (!uploadTracker.ContainsKey(fileName))
        return 0;

    return uploadTracker[fileName];
}
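A minimal usage sketch for the helpers above, assuming it runs inside the same class (WritingLargeFile is private). The bucket name, key, media id, and file path are placeholders, and since Upload blocks, the progress poll would normally run on another thread or a UI timer.

// Hypothetical usage; all literal values below are placeholders.
var s3Client = AWSClientFactory.CreateAmazonS3Client(
    ConfigurationManager.AppSettings["AWSAccessKey"],
    ConfigurationManager.AppSettings["AWSSecretKey"]);

string uploadedKey = WritingLargeFile(s3Client, 42, "my-bucket", "backups/video.mp4",
    "video.mp4", "Nightly backup", @"D:\exports\video.mp4");

// From another thread/timer while the upload runs:
int percent = GetAmazonUploadPercentDone("video.mp4");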
Answer 4 (score: 1):
Check out this topic here: How to upload a file to amazon S3 super easy using c#, which includes a demo project to download. It is high level and uses the AWS SDK for .NET 3.5 (and higher); it can be used with the following code:
// preparing our file and directory names
string fileToBackup = @"d:\mybackupFile.zip"; // test file
string myBucketName = "mys3bucketname"; //your s3 bucket name goes here
string s3DirectoryName = "justdemodirectory";
string s3FileName = @"mybackupFile uploaded in 12-9-2014.zip";
AmazonUploader myUploader = new AmazonUploader();
myUploader.sendMyFileToS3(fileToBackup, myBucketName, s3DirectoryName, s3FileName);