Cross-account AWS CodePipeline cannot access CloudFormation deployment artifacts

Time: 2018-12-19 14:21:10

Tags: amazon-web-services aws-codepipeline aws-codebuild aws-kms multiple-accounts

I have a cross-account pipeline running in account CI that deploys resources into another account, DEV, via CloudFormation. After deployment I save the artifact outputs as a JSON file and want to access them in another pipeline action via CodeBuild. CodeBuild fails in the DOWNLOAD_SOURCE phase with the following message:

  CLIENT_ERROR: AccessDenied: Access Denied, status code: 403, request ID: 123456789, host ID: xxxxx/yyyy/zzzz/xxxx= for primary source, source version arn:aws:s3:::my-bucket/my-pipeline/DeployArti/XcUNqOP

The problem is presumably that when CloudFormation executes in the other account, it encrypts the artifact with a different key than the pipeline itself uses.

Is it possible to give CloudFormation an explicit KMS key to encrypt the artifacts with, or is there any other way to get these artifacts back into the pipeline?

Everything works when all of this is executed within a single account.

Here is my code snippet (deployed in the CI account):

  MyCodeBuild:
    Type: AWS::CodeBuild::Project
    Properties:
      Artifacts:
        Type: CODEPIPELINE
      Environment: ...
      Name: !Sub "my-codebuild"
      ServiceRole: !Ref CodeBuildRole
      EncryptionKey: !GetAtt KMSKey.Arn
      Source:
        Type: CODEPIPELINE
        BuildSpec: ...

  CrossAccountCodePipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      Name: "my-pipeline"
      RoleArn: !GetAtt CodePipelineRole.Arn
      Stages:
      - Name: Source
        ...
      - Name: StagingDev
        Actions:
        - Name: create-stack-in-DEV-account
          InputArtifacts:
          - Name: SourceArtifact
          OutputArtifacts:
          - Name: DeployArtifact
          ActionTypeId:
            Category: Deploy
            Owner: AWS
            Version: "1"
            Provider: CloudFormation
          Configuration:
            StackName: "my-dev-stack"
            ChangeSetName: !Sub "my-changeset"
            ActionMode: CREATE_UPDATE
            Capabilities: CAPABILITY_NAMED_IAM
            # this is the artifact I want to access from the next action 
            # within this CI account pipeline
            OutputFileName: "my-DEV-output.json"   
            TemplatePath: !Sub "SourceArtifact::stack/my-stack.yml"
            RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cloudformation-role"
          RoleArn: !Sub "arn:aws:iam::${DevAccountId}:role/dev-cross-account-role"
          RunOrder: 1
        - Name: process-DEV-outputs
          InputArtifacts:
          - Name: DeployArtifact
          ActionTypeId:
            Category: Build
            Owner: AWS
            Version: "1"
            Provider: CodeBuild
          Configuration:
            ProjectName: !Ref MyCodeBuild
          RunOrder: 2
      ArtifactStore:
        Type: S3
        Location: !Ref S3ArtifactBucket
        EncryptionKey:
          Id: !GetAtt KMSKey.Arn
          Type: KMS

4 answers:

Answer 0 (score: 2)

CloudFormation generates the output artifact, zips it, and uploads the file to S3. It does not add an ACL granting access to the bucket owner. As a result, you get a 403 when you try to use the CloudFormation output artifact further along in the pipeline.

The workaround is to have one more action in the pipeline immediately after the CloudFormation action, for example a Lambda function that can assume a role in the target account and update the object ACL, e.g. to bucket-owner-full-control.
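
For illustration, a minimal sketch of what that extra action could look like in the pipeline above, assuming a Lambda function (here called fix-artifact-acl, a hypothetical name) already exists in the CI account and receives the remote role ARN through UserParameters:

        - Name: fix-DEV-artifact-acl
          InputArtifacts:
          - Name: DeployArtifact
          ActionTypeId:
            Category: Invoke
            Owner: AWS
            Version: "1"
            Provider: Lambda
          Configuration:
            FunctionName: "fix-artifact-acl"
            # role in the DEV account that is allowed to change the artifact's object ACL
            UserParameters: !Sub "arn:aws:iam::${DevAccountId}:role/dev-artifact-acl-role"
          RunOrder: 2

The existing process-DEV-outputs CodeBuild action would then move to RunOrder 3, so that it only downloads the artifact after the ACL has been corrected.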

Answer 1 (score: 0)

mockora's answer is correct. Here is an example Lambda function in Python that fixes the problem, which you can configure as an Invoke action immediately after your cross-account CloudFormation deployment.

In this example, you configure the Lambda invoke action's UserParameters setting as the ARN of the role you want the Lambda function to assume in the remote account to fix the S3 object ACLs. Obviously your Lambda function will need sts:AssumeRole permissions for that role, and the remote account role will need s3:PutObjectAcl permissions on the pipeline bucket artifacts.

import os
import logging, datetime, json
import boto3
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all

# X-Ray
patch_all()

# Configure logging
logging.basicConfig()
log = logging.getLogger()
log.setLevel(os.environ.get('LOG_LEVEL','INFO'))
def format_json(data):
  return json.dumps(data, default=lambda d: d.isoformat() if isinstance(d, datetime.datetime) else str(d))

# Boto3 Client
client = boto3.client
codepipeline = client('codepipeline')
sts = client('sts')

# S3 Object ACLs Handler
def s3_acl_handler(event, context):
  log.info(f'Received event: {format_json(event)}')
  # Get Job
  jobId = event['CodePipeline.job']['id']
  jobData = event['CodePipeline.job']['data']
  # Ensure we return a success or failure result
  try:
    # Assume IAM role from user parameters
    credentials = sts.assume_role(
      RoleArn=jobData['actionConfiguration']['configuration']['UserParameters'],
      RoleSessionName='codepipeline',
      DurationSeconds=900
    )['Credentials']
    # Create S3 client from assumed role credentials
    s3 = client('s3',
      aws_access_key_id=credentials['AccessKeyId'],
      aws_secret_access_key=credentials['SecretAccessKey'],
      aws_session_token=credentials['SessionToken']
    )
    # Set S3 object ACL for each input artifact
    for inputArtifact in jobData['inputArtifacts']:
      s3.put_object_acl(
        ACL='bucket-owner-full-control',
        Bucket=inputArtifact['location']['s3Location']['bucketName'],
        Key=inputArtifact['location']['s3Location']['objectKey']
      )
    codepipeline.put_job_success_result(jobId=jobId)
  except Exception as e:
    logging.exception('An exception occurred')
    codepipeline.put_job_failure_result(
      jobId=jobId,
      failureDetails={'type': 'JobFailed','message': getattr(e, 'message', repr(e))}
    )
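
The IAM pieces mentioned above could be sketched roughly as follows (role and policy names such as AclFixLambdaRole, DevArtifactAclRole and dev-artifact-acl-role are hypothetical; the first policy lives in the CI account, the second in the DEV account):

  # CI account: lets the Lambda execution role assume the remote ACL-fixing role
  LambdaAssumeRolePolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: "allow-assume-dev-acl-role"
      Roles:
      - !Ref AclFixLambdaRole
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: Allow
          Action: "sts:AssumeRole"
          Resource: !Sub "arn:aws:iam::${DevAccountId}:role/dev-artifact-acl-role"

  # DEV account: lets the assumed role update object ACLs in the pipeline artifact bucket
  DevAclRolePolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: "allow-put-object-acl"
      Roles:
      - !Ref DevArtifactAclRole
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
        - Effect: Allow
          Action: "s3:PutObjectAcl"
          Resource: !Sub "arn:aws:s3:::${S3ArtifactBucket}/*"

In addition, the trust policy of dev-artifact-acl-role in the DEV account has to allow the CI-account Lambda execution role to assume it.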

Answer 2 (score: 0)

I have been doing cross-account deployments with CodePipeline for a few years. I even have a GitHub project on simplifying the process using organizations. There are a few key elements to it.

  1. Make sure your S3 bucket is using a CMK, not the default encryption key.
  2. Make sure you grant the accounts you are deploying to access to that key. For example, when you have a CloudFormation template that runs in an account other than the one the template lives in, the role used in that account needs access to the key (and the S3 bucket); see the key-policy sketch after this list.
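
As an illustration of point 2, the key policy on the CMK in the CI account could look roughly like this (a sketch only; DevAccountId is assumed to be a parameter holding the DEV account id):

  KMSKey:
    Type: AWS::KMS::Key
    Properties:
      KeyPolicy:
        Version: "2012-10-17"
        Statement:
        # full administration of the key from the CI account itself
        - Sid: "AllowAdministrationFromThisAccount"
          Effect: Allow
          Principal:
            AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
          Action: "kms:*"
          Resource: "*"
        # allow roles in the DEV account to use the key for artifact encryption/decryption
        - Sid: "AllowUseOfTheKeyFromTheDevAccount"
          Effect: Allow
          Principal:
            AWS: !Sub "arn:aws:iam::${DevAccountId}:root"
          Action:
          - "kms:Encrypt"
          - "kms:Decrypt"
          - "kms:ReEncrypt*"
          - "kms:GenerateDataKey*"
          - "kms:DescribeKey"
          Resource: "*"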

It is certainly more complicated than just that, but at no point do I run a lambda to change the object owner of the artifacts. Create a pipeline in CodePipeline that uses resources from another AWS account provides details on what you need to do to make it work.

Answer 3 (score: -1)

CloudFormation should use the KMS encryption key provided in the artifact store definition of your pipeline: https://docs.aws.amazon.com/codepipeline/latest/APIReference/API_ArtifactStore.html#CodePipeline-Type-ArtifactStore-encryptionKey

So as long as you give it a custom key there and also allow the other account to use that key, it should work.
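
On the DEV side, the roles that CodePipeline and CloudFormation assume also need identity-based permissions to read and write the encrypted artifacts. A rough sketch only, where ArtifactBucketName and ArtifactKmsKeyArn are hypothetical parameters pointing at the CI account's artifact bucket and CMK:

  # DEV account: policy attached to the roles the pipeline assumes there
  DevCrossAccountArtifactPolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: "allow-pipeline-artifact-access"
      Roles:
      - "dev-cross-account-role"
      - "dev-cloudformation-role"
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
        # read and write artifacts in the CI account's artifact bucket
        - Effect: Allow
          Action:
          - "s3:GetObject"
          - "s3:GetObjectVersion"
          - "s3:PutObject"
          Resource: !Sub "arn:aws:s3:::${ArtifactBucketName}/*"
        # use the CI account's CMK to decrypt/encrypt those artifacts
        - Effect: Allow
          Action:
          - "kms:Decrypt"
          - "kms:DescribeKey"
          - "kms:Encrypt"
          - "kms:GenerateDataKey*"
          - "kms:ReEncrypt*"
          Resource: !Ref ArtifactKmsKeyArn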

This is mostly covered in this document: https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html