I have a Laravel project that creates a new backup every day using spatie/laravel-backup and uploads it to S3. It is configured correctly and has been running for over a year.
Suddenly, the backup fails to complete the upload process with the following error:
Copying zip failed because: An exception occurred while uploading parts to a multipart upload. The following parts had errors:
- Part 17: Error executing "UploadPart" on "https://s3.eu-west-1.amazonaws.com/my.bucket/Backups/2019-04-01-09-47-33.zip?partNumber=17&uploadId=uploadId"; AWS HTTP error: cURL error 55: SSL_write() returned SYSCALL, errno = 104 (see http://curl.haxx.se/libcurl/c/libcurl-errors.html) (server): 100 Continue -
- Part 16: Error executing "UploadPart" on "https://s3.eu-west-1.amazonaws.com/my.bucket/Backups/2019-04-01-09-47-33.zip?partNumber=16&uploadId=uploadId"; AWS HTTP error: Client error: `PUT https://s3.eu-west-1.amazonaws.com/my.bucket/Backups/2019-04-01-09-47-33.zip?partNumber=16&uploadId=uploadId` resulted in a `400 Bad Request` response:
<?xml version="1.0" encoding="UTF-8"?>
<Code>RequestTimeout</Code><Message>Your socket connection to the server w (truncated...)
RequestTimeout (client): Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed. - <?xml version="1.0" encoding="UTF-8"?>
<Code>RequestTimeout</Code>
<Message>Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed.</Message>
<RequestId>RequestId..</RequestId>
<HostId>Host id..</HostId>
I tried running:

php artisan backup:run --only-db    // 110MB zip file
php artisan backup:run --only-files // 34MB zip file

and they both work correctly. My guess is that the error is caused by the total size of the zip file (about 145MB), which would explain why it never happened before (when the backups were smaller). The laravel-backup package has a related issue, but I don't think this is a problem with the library, which simply uses the underlying S3 Flysystem interface to upload the zip.
Should I set some parameter (e.g., increase the curl upload size limit), or does the system already split the file into multiple chunks?
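For context, my understanding is that the package hands the zip to Flysystem, whose S3 adapter ends up calling the SDK's S3Client::upload(); that helper transparently switches to a multipart upload once the body exceeds a size threshold, which is why the error above lists individual failed parts. A minimal sketch of that underlying call (the bucket, key, and path here are placeholders, not my real config):

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'eu-west-1',
]);

// upload() switches to a multipart upload when the source exceeds
// the 'mup_threshold' option (the SDK default is 16 MB).
$s3->upload(
    'my.bucket',                        // placeholder bucket
    'Backups/backup.zip',               // placeholder key
    fopen('/path/to/backup.zip', 'r'),  // placeholder local path
    'private',
    ['mup_threshold' => 16 * 1024 * 1024]
);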
Answer 0 (score: 1)
You can try adding a timeout parameter to the S3Client (https://docs.aws.amazon.com/pt_br/sdk-for-php/v3/developer-guide/guide_configuration.html), like this:
$s3 = new Aws\S3\S3Client([
    'version'     => 'latest',
    'region'      => 'us-west-2',
    'credentials' => $credentials,
    'http'        => [
        'timeout' => 360 // seconds before the HTTP handler gives up on a request
    ]
]);
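(As far as I know, that 'http' array is passed through to the SDK's HTTP handler, Guzzle by default, so related request options such as 'connect_timeout' can be set the same way.)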
But in Laravel, you should do this in config/filesystems.php:
'disks' => [
    's3' => [
        'driver' => 's3',
        'key'    => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'region' => 'us-east-1',
        'bucket' => env('FILESYSTEM_S3_BUCKET'),
        'http'   => [
            'timeout' => 360 // forwarded to the underlying S3Client, as above
        ]
    ]
]
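Remember that Laravel caches configuration, so run php artisan config:clear (or rebuild the cache) after editing config/filesystems.php.

If raising the timeout is not enough, the AWS SDK also lets you drive the multipart upload yourself and resume only the failed parts; this bypasses laravel-backup's normal flow, so treat it as a diagnostic sketch rather than a drop-in fix (the file path, bucket, and key below are placeholders):

use Aws\Exception\MultipartUploadException;
use Aws\S3\MultipartUploader;
use Aws\S3\S3Client;

$s3 = new S3Client(['version' => 'latest', 'region' => 'eu-west-1']);

$source = '/path/to/backup.zip'; // placeholder local path

$uploader = new MultipartUploader($s3, $source, [
    'bucket'    => 'my.bucket',          // placeholder bucket
    'key'       => 'Backups/backup.zip', // placeholder key
    'part_size' => 16 * 1024 * 1024,     // bigger parts mean fewer requests
]);

do {
    try {
        $result = $uploader->upload();
    } catch (MultipartUploadException $e) {
        // Resume from the parts that already succeeded instead of
        // restarting the whole upload from scratch.
        $uploader = new MultipartUploader($s3, $source, [
            'state' => $e->getState(),
        ]);
    }
} while (!isset($result));

echo "Upload complete: {$result['ObjectURL']}\n";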