I'm using Bitbucket Pipelines. I want it to push the entire contents of my repository (which is very small) to S3. I don't want to zip it, push it to S3, and then unzip it there. I just want it to take the existing file/folder structure in my Bitbucket repo and push that to S3.
What should the yaml file and .py file look like?
This is the current yaml file:
image: python:3.5.1

pipelines:
  branches:
    master:
      - step:
          script:
            # - apt-get update # required to install zip
            # - apt-get install -y zip # required if you want to zip repository objects
            - pip install boto3==1.3.0 # required for s3_upload.py
            # the first argument is the name of the existing S3 bucket to upload the artefact to
            # the second argument is the artefact to be uploaded
            # the third argument is the bucket key
            # html files
            - python s3_upload.py my-bucket-name html/index_template.html html/index_template.html # run the deployment script
            # Example command line parameters. Replace with your values
            #- python s3_upload.py bb-s3-upload SampleApp_Linux.zip SampleApp_Linux # run the deployment script
This is my current python:
from __future__ import print_function
import os
import sys
import argparse
import boto3
from botocore.exceptions import ClientError


def upload_to_s3(bucket, artefact, bucket_key):
    """
    Uploads an artefact to Amazon S3
    """
    try:
        client = boto3.client('s3')
    except ClientError as err:
        print("Failed to create boto3 client.\n" + str(err))
        return False

    try:
        client.put_object(
            Body=open(artefact, 'rb'),
            Bucket=bucket,
            Key=bucket_key
        )
    except ClientError as err:
        print("Failed to upload artefact to S3.\n" + str(err))
        return False
    except IOError as err:
        print("Failed to access artefact in this directory.\n" + str(err))
        return False

    return True


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("bucket", help="Name of the existing S3 bucket")
    parser.add_argument("artefact", help="Name of the artefact to be uploaded to S3")
    parser.add_argument("bucket_key", help="Name of the S3 Bucket key")
    args = parser.parse_args()

    if not upload_to_s3(args.bucket, args.artefact, args.bucket_key):
        sys.exit(1)


if __name__ == "__main__":
    main()
This requires me to list every file in the repo as a separate command in the yaml file. I just want it to grab everything and upload it all to S3.
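For what it's worth, here is a rough, untested sketch of the direction I have in mind: extending s3_upload.py so it walks the whole working directory and uploads every file under its relative path as the key. The single bucket argument, the .git exclusion, and the function name upload_directory_to_s3 are my own assumptions, not anything from the Bitbucket docs.

    import os
    import sys

    import boto3
    from botocore.exceptions import ClientError


    def upload_directory_to_s3(bucket, local_directory="."):
        """Upload every file under local_directory to S3, keeping the relative path as the key."""
        client = boto3.client('s3')
        for root, dirs, files in os.walk(local_directory):
            if '.git' in dirs:
                dirs.remove('.git')  # skip repository metadata
            for name in files:
                local_path = os.path.join(root, name)
                # e.g. html/index_template.html -> key "html/index_template.html"
                key = os.path.relpath(local_path, local_directory).replace(os.sep, '/')
                try:
                    with open(local_path, 'rb') as body:
                        client.put_object(Body=body, Bucket=bucket, Key=key)
                except (ClientError, IOError) as err:
                    print("Failed to upload " + key + ": " + str(err))
                    return False
        return True


    if __name__ == "__main__":
        if not upload_directory_to_s3(sys.argv[1]):
            sys.exit(1)

The yaml script would then only need a single call such as: python s3_upload.py my-bucket-name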
Answer 0 (score: 6)
The following works for me. Here is my yaml file, using a docker image that ships with the official AWS command line tools: cgswong/aws. Very convenient, and more capable than the image Bitbucket recommends (abesiyo/s3).
image: cgswong/aws

pipelines:
  branches:
    master:
      - step:
          script:
            - aws s3 --region "us-east-1" sync public/ s3://static-site-example.activo.com --cache-control "public, max-age=14400" --delete
Some additional notes and the full write-up are in this article: Continuous Deployment with Bitbucket Pipelines, S3, and CloudFront
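If you would rather keep the boto3 script from the question instead of switching to the CLI, the same Cache-Control header can be set through put_object. A minimal sketch, where the file name, bucket, and key are only placeholders:

    import boto3

    client = boto3.client('s3')
    # Rough equivalent of the --cache-control flag above, for a single object;
    # 'public/index.html', the bucket, and the key are placeholder values.
    with open('public/index.html', 'rb') as body:
        client.put_object(
            Body=body,
            Bucket='static-site-example.activo.com',
            Key='index.html',
            CacheControl='public, max-age=14400',
        )

Note that the --delete behaviour (removing objects that no longer exist locally) has no single boto3 call; you would have to list the bucket and delete stale keys yourself.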
Answer 1 (score: 1)
You can switch to using the docker image https://hub.docker.com/r/abesiyo/s3/
It runs quite well.
bitbucket-pipelines.yml:
image: abesiyo/s3

pipelines:
  default:
    - step:
        script:
          - s3 --region "us-east-1" rm s3://<bucket name>
          - s3 --region "us-east-1" sync . s3://<bucket name>
Also set the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the Bitbucket Pipelines settings.
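The same two variables are all a boto3-based script (like the one in the question) needs as well, since boto3 reads them from the environment automatically. A small sketch with a placeholder bucket and key:

    import boto3

    # No credentials in the code or the repo: boto3 picks up AWS_ACCESS_KEY_ID,
    # AWS_SECRET_ACCESS_KEY (and AWS_DEFAULT_REGION) from the pipeline environment.
    client = boto3.client('s3')
    client.put_object(Body=b'hello', Bucket='my-bucket-name', Key='hello.txt')  # placeholder bucket/key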
Answer 2 (score: 1)
To deploy a static website to Amazon S3, I have this bitbucket-pipelines.yml configuration file:
image: attensee/s3_website

pipelines:
  default:
    - step:
        script:
          - s3_website push
I'm using the attensee/s3_website docker image because it has the awesome s3_website tool installed. The configuration file for s3_website (s3_website.yml) [create this file in the root of the Bitbucket repository] looks like this:
s3_id: <%= ENV['S3_ID'] %>
s3_secret: <%= ENV['S3_SECRET'] %>
s3_bucket: bitbucket-pipelines
site: .
We have to define the environment variables S3_ID and S3_SECRET in the Bitbucket Pipelines settings.
Thanks to https://www.savjee.be/2016/06/Deploying-website-to-ftp-or-amazon-s3-with-BitBucket-Pipelines/ for the solution.
Answer 3 (score: 0)
Atlassian now offers "pipes" to simplify the configuration of some common tasks. There is also one for S3 upload.
No need to specify a different image type:
image: node:8

pipelines:
  branches:
    master:
      - step:
          script:
            - pipe: atlassian/aws-s3-deploy:0.2.1
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: "us-east-1"
                S3_BUCKET: "your.bucket.name"
                LOCAL_PATH: "dist"