I am trying to create a Firehose stream that can receive data, via AWS Lambda, from different regions in Account A and deliver it to a Redshift table in Account B. To do this, I created an IAM role on Account A with this trust policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "firehose.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
I gave it the following permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::b-bucket/*",
        "arn:aws:s3:::b-bucket"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "firehose:*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "redshift:*"
      ],
      "Resource": "*"
    }
  ]
}
On Account B, I created a role with this trust policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "firehose.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "11111111111"
        }
      }
    }
  ]
}
I gave that role the following access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::b-bucket",
        "arn:aws:s3:::b-bucket/*",
        "arn:aws:s3:::b-account-logs",
        "arn:aws:s3:::b-account-logs/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "firehose:*"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "redshift:*",
      "Resource": "arn:aws:redshift:us-east-1:cluster:account-b-cluster*"
    }
  ]
}
I also edited the access policy on the S3 bucket to grant access to my Account A role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::11111111111:role/AccountAXAccountBPolicy"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::b-bucket",
        "arn:aws:s3:::b-bucket/*"
      ]
    }
  ]
}
However, none of this works. When I try to create the stream in Account A, it does not list the bucket or the Redshift cluster from Account B. Is there any way to make this work?
Answer 0 (score: 2)
John's answer is half right. I would recommend that the owner of the account with the Redshift cluster create the Firehose stream, since creating it through the CLI requires you to supply the cluster username and password. Having the cluster owner create the stream and share IAM role permissions on the stream is safer, both for security and when credentials change. Also, you cannot create a stream that accesses a database outside its own region, so make sure the delivering application targets the correct stream and region.
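If you do go that route, the producer in Account A never needs the cluster credentials at all; it only needs to be allowed to assume a role in Account B that can write to the stream. A minimal sketch of that hand-off from the Account A side (the role name, account ID placeholder and session name are hypothetical, not part of the setup above):

# Hypothetical: a role the cluster owner created in Account B that trusts Account A
# and that is allowed to call firehose:PutRecord on the shared stream.
aws sts assume-role \
    --role-arn arn:aws:iam::<account-b-id>:role/FirehoseWriterRole \
    --role-session-name firehose-writer-session
# Export the returned AccessKeyId, SecretAccessKey and SessionToken as
# AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN,
# then call put-record against the stream in Account B.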
Read on below for how to create the cross-account stream.
In my case I have access to both accounts, so to minimize the number of changes and to keep monitoring simple, I created the stream on the Account A side.
The permissions above are correct, but you cannot create a Firehose stream from Account A to Account B through the AWS Console. You need to do it through the AWS CLI:
aws firehose create-delivery-stream \
    --delivery-stream-name testFirehoseStreamToRedshift \
    --redshift-destination-configuration 'RoleARN="arn:aws:iam::11111111111:role/AccountAXAccountBRole",ClusterJDBCURL="jdbc:redshift://<cluster-url>:<cluster-port>/<database-name>",CopyCommand={DataTableName="<schema_name>.x_test",DataTableColumns="ID1,STRING_DATA1",CopyOptions="csv"},Username="<Cluster_User_name>",Password="<Cluster_Password>",S3Configuration={RoleARN="arn:aws:iam::11111111111:role/AccountAXAccountBRole",BucketARN="arn:aws:s3:::b-bucket",Prefix="test/",CompressionFormat="UNCOMPRESSED"}'
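Before loading any test data, you can confirm that the stream has reached the ACTIVE state, for example:

aws firehose describe-delivery-stream --delivery-stream-name testFirehoseStreamToRedshift \
    --query 'DeliveryStreamDescription.DeliveryStreamStatus'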
You can test it by creating a test table in the other AWS account:
create table test_schema.x_test
(
ID1 INT8 NOT NULL,
STRING_DATA1 VARCHAR(10) NOT NULL
)
distkey(ID1)
sortkey(ID1,STRING_DATA1);
You can then send test data like this:
aws firehose put-record --delivery-stream-name testFirehoseStreamToRedshift --record '{"Data":"1,\"ABCDEFGHIJ\""}'
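If rows do not show up in the table, one quick check (reusing the bucket and prefix from the create command above) is whether Firehose is at least writing its intermediate files to S3 before the COPY into Redshift runs, for example:

aws s3 ls s3://b-bucket/test/ --recursive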
With the permissions configured as above, cross-account access should be set up for you.
Documentation:
Create delivery stream - http://docs.aws.amazon.com/cli/latest/reference/firehose/create-delivery-stream.html
Put record - http://docs.aws.amazon.com/cli/latest/reference/firehose/put-record.html
Answer 1 (score: 1)
No.
Amazon Kinesis Firehose will only output to Amazon S3 buckets and Amazon Redshift clusters in the same region.
However, you can send data to Kinesis Firehose simply by calling the appropriate endpoint. So you could have applications in any AWS account and any region (or anywhere on the Internet) send data to the Firehose, which would then store it in a bucket or cluster in the same region as the Firehose itself.
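For example, a producer in another account or region only needs credentials that are allowed to call the stream, plus the Firehose's own region on the API call; a sketch reusing the stream name from the answer above (the us-east-1 region is an assumption):

aws firehose put-record --region us-east-1 \
    --delivery-stream-name testFirehoseStreamToRedshift \
    --record '{"Data":"1,\"ABCDEFGHIJ\""}'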