Has anyone found a good system for deploying Helm 3 charts to EKS via CodeDeploy? I haven't found anything in my searching that quite matches, and want to check before rolling my own.
My research so far: the existing options all have problems with the kubectl or helm binaries, but I'm using helm. So it seems my best bet is to start from the last of those options: make my own helm 3 layer, have CodeBuild produce the artifacts (the helm chart and kube config), modify the Helm lambda quickstart to use them, and then kick off the helm upgrade from that lambda in CodeDeploy. Is that a sane strategy?
This task seems so obvious. Kubernetes is a big deal. Helm is a big deal. CI/CD is a big deal. So there should be plenty of AWS users wanting to do this, and yet there is no clear best practice to follow.
Answer 0 (score: 0)
I agree with you, this is a gap. CodeDeploy's deployment integrations are very tightly scoped, i.e. it can only deploy to:
EC2/on-premises instances
Lambda functions
ECS services
So far there is no EKS deployment option.
Without a native integration, anything you do to meet the requirement will be a "hack" at best, and judging from CodeDeploy's architecture it is not even a good fit for such hacks. Instead, I'd suggest using CodeBuild and running the helm commands yourself in the buildspec. See this answer [1] for connecting CodeBuild to EKS. There may be other options along similar lines, such as CodePipeline + Jenkins, but the idea is the same.
Answer 1 (score: 0)
Here's what I ended up doing. To deploy from a lambda function I needed layers for kubectl and helm. The AWS EKS Quickstart has a good kubectl layer, but its helm layer isn't Helm 3, so I made my own:
docker build ./lambdas/layers/helm -t makehelm:latest
pushd lambdas/layers/helm
mkdir -p lambda/bin
docker run -v $PWD/lambda/bin:/out makehelm:latest cp -R /usr/local/bin/helm /out/
zip -r lambda.zip lambda
lambdas/layers/helm contains the following Dockerfile:
FROM amazonlinux:2
RUN yum update -y
RUN yum install -y openssl-devel
RUN yum install -y openssl
RUN yum groupinstall -y "Development Tools"
# debugging leftover: show which package provides /usr/bin/which
RUN yum provides /usr/bin/which
RUN yum install -y which
# the official get-helm-3 script installs the latest Helm 3 release to /usr/local/bin
RUN curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
RUN chmod 700 get_helm.sh
RUN ./get_helm.sh
RUN ls -als /usr/local/bin/helm
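At runtime, Lambda unpacks layer contents under /opt and already has /opt/bin on the PATH, so the helm binary packaged as lambda/bin/helm above resolves as /opt/bin/helm with no extra wiring. A quick sanity check from inside a handler (my own snippet, not part of the quickstart):
import subprocess

# helm comes from the layer: lambda/bin/helm -> /opt/bin/helm, already on Lambda's PATH
print(subprocess.check_output(['helm', 'version', '--short']).decode())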
The next step is to produce the helm chart as an artifact of the CodeBuild pipeline (my chart templates live in the product's source repository):
. . .
post_build: {
commands: [
. . .
'docker push "${PRODUCT_REPOSITORY_URI}:${CODEBUILD_RESOLVED_SOURCE_VERSION}"',
'./scripts/make-helm-deployment-values.sh > product-chart/values-dev.yaml',
'./scripts/make-aws-deployment-values.sh > product-chart/templates/aws-resources-configmap.yaml',
'cat product-chart/values-dev.yaml',
'cat product-chart/templates/aws-resources-configmap.yaml',
'zip -r facts_machine_chart.zip product-chart/',
. . .
]
}
},
artifacts: {
'base-directory': '.',
files: ['facts_machine_chart.zip'],
},
. . .
The make-* scripts are where parameters derived from the CloudFormation templates flow into the code running in EKS (i.e. CF template -> CodeBuild environment variables -> script generates an EKS configmap from the environment -> configmap consumed by the chart). I use this for things like CloudFront ARNs.
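The scripts themselves are small. As a sketch, a Python equivalent of make-helm-deployment-values.sh might look like the following (the real script is shell, and CLOUDFRONT_ARN is an illustrative variable name, not necessarily the one I used):
#!/usr/bin/env python3
# sketch of scripts/make-helm-deployment-values.sh: CodeBuild env vars
# (populated from CloudFormation outputs) become chart values on stdout
import os

print('image:')
print('  repository: {}'.format(os.environ['PRODUCT_REPOSITORY_URI']))
print('  tag: {}'.format(os.environ['CODEBUILD_RESOLVED_SOURCE_VERSION']))
print('cloudFrontArn: {}'.format(os.environ.get('CLOUDFRONT_ARN', '')))  # illustrative name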
Next, define the lambda, adding the appropriate permissions and environment:
const helmLayer = new LayerVersion(this, 'helmLayer', {
code: Code.fromAsset(path.join(__dirname, '../lambdas/layers/helm/lambda')),
compatibleRuntimes: [Runtime.PYTHON_3_7],
description: 'helm support',
layerVersionName: 'helmLayer'
});
deployFunction = new Function(this, 'deployFunction', {
runtime: Runtime.PYTHON_3_7,
handler: 'index.handler',
code: Code.fromAsset(__dirname + '/../lambdas/deploy'),
timeout: cdk.Duration.seconds(300),
  // kubectlLayer is the EKS Quickstart's kubectl layer, defined elsewhere
  layers: [kubectlLayer, helmLayer]
});
// lambda created above is passed in via `props.deployLambda`
// helmChartArtifact is the CDK construct matching the BuildProps artifact declaration of the chart zipfile
props.deployLambda.addEnvironment('EKS_CLUSTER_ROLE_ARN', props.clusterDeveloperRole.roleArn);
props.deployLambda.addEnvironment('EKS_CLUSTER_ARN', props.cluster.clusterArn);
props.deployLambda.addEnvironment('EKS_CLUSTER_ENDPOINT', props.cluster.clusterEndpoint);
const deployAction = new codepipeline_actions.LambdaInvokeAction({
actionName: 'Deploy',
lambda: props.deployLambda,
inputs: [helmChartArtifact]
});
pipeline.addStage({
stageName: 'Deploy',
actions: [deployAction],
});
const kubeConfigSecret = secretsmanager.Secret.fromSecretArn(this, 'ProductDevSecret', 'arn:aws:secretsmanager:us-west-2:947675402426:secret:dev/product/kubeconfig-2XgYxq');
kubeConfigSecret.grantRead(props.deployLambda.role as iam.IRole);
// Must be admin to deploy in our case...
// props.deployLambda.addToRolePolicy(props.clusterDeveloperPolicyStatement);
props.deployLambda.addToRolePolicy(props.clusterAdminPolicyStatement);
props.deployLambda.addToRolePolicy(new iam.PolicyStatement({
effect: iam.Effect.ALLOW,
actions: [
'eks:DescribeCluster'
],
resources: [props.cluster.clusterArn]
}));
I haven't found a more elegant way to wire up authentication, so after deploying the templates I (manually) store the generated .kubeconfig in SecretsManager and use it to authenticate the lambda to the cluster. I wish there were a more elegant solution.
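The manual step amounts to something like this sketch (the cluster name product-dev is an assumption; the secret name matches what the lambda reads below):
import subprocess
import boto3

# one-time, after the cluster template deploys: capture a kubeconfig and
# store it in Secrets Manager where the deploy lambda can read it
subprocess.run(['aws', 'eks', 'update-kubeconfig',
                '--name', 'product-dev',            # assumed cluster name
                '--kubeconfig', '/tmp/kubeconfig'], check=True)
with open('/tmp/kubeconfig') as f:
    boto3.client('secretsmanager').create_secret(
        Name='dev/product/kubeconfig', SecretString=f.read())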
Finally, the lambda itself. It generally works like the one in the EKS quickstart, with these highlights:
# pull the kubeconfig from Secrets Manager and write it under /tmp,
# the only writable filesystem in Lambda
secret_string = client.get_secret_value(SecretId='dev/product/kubeconfig')['SecretString']
if not os.path.exists('/tmp/.kube'):
    os.mkdir('/tmp/.kube')
kubeconfig_filename = "/tmp/.kube/config"
with open(kubeconfig_filename, "w") as text_file:
    text_file.write(secret_string)
os.environ["KUBECONFIG"] = kubeconfig_filename
# Extract the Job ID
job_id = event['CodePipeline.job']['id']
# Extract the Job Data
job_data = event['CodePipeline.job']['data']
# debugging: list the layer contents under /opt to confirm kubectl and helm are present
for currentpath, folders, files in os.walk('/opt'):
    for file in sorted(files):
        print(os.path.join(currentpath, file))
# with open(kubeconfig_filename, 'r') as fin:
#     print(fin.read())
# run_command("cat {}".format(kubeconfig_filename))
print(json.dumps(event))
# Get the list of artifacts passed to the function
artifacts = job_data['inputArtifacts']
# Get the artifact details
artifact_data = find_artifact(artifacts, 'ProductChart')
# Get S3 client to access artifact with
s3 = setup_s3_client(job_data)
# Get the chart artifact out of S3
template = get_template(s3, artifact_data)
run_command('kubectl version')
run_command('helm status product')  # reveals whether the release already exists
run_command('helm lint /tmp/product-chart/')
run_command('kubectl delete job db-migrate', True)  # True lets this fail if the job isn't there
# TODO: could be upgrade or install, based on the status above, if we really want full automation
run_command('helm upgrade product /tmp/product-chart/ -f /tmp/product-chart/values-dev.yaml')
put_job_success(job_id, 'success')
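The helpers (find_artifact, setup_s3_client, get_template, put_job_success) follow the AWS CodePipeline Lambda job-worker sample. run_command is roughly the sketch below; the ignore_errors argument is my reading of the boolean passed to the kubectl delete job call above:
import subprocess
import boto3

def run_command(command, ignore_errors=False):
    # helm and kubectl resolve via /opt/bin, provided by the layers
    try:
        print(subprocess.check_output(command.split(), stderr=subprocess.STDOUT).decode())
    except subprocess.CalledProcessError as e:
        print(e.output.decode())
        if not ignore_errors:
            raise

def put_job_success(job_id, message):
    # report success back to CodePipeline so the Deploy stage completes
    print(message)
    boto3.client('codepipeline').put_job_success_result(jobId=job_id)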