AWS Fargate task - awslogs driver - intermittent logs

Time: 2019-01-08 02:47:11

Tags: python docker python-asyncio amazon-ecs aiohttp

I'm running a one-off Fargate task that runs a small Python script. The task definition is configured to use awslogs to send logs to CloudWatch, but I'm facing a very strange, intermittent problem.

Logs sometimes appear in the newly created CloudWatch stream and sometimes don't. I've tried stripping parts of my code out, and for now, this is what I have.

When I remove the asyncio/aiohttp fetching logic, the print statements usually show up in the CloudWatch logs. Since the problem is intermittent, though, I can't be 100% sure this always happens.

With the fetching logic included, however, I sometimes get log streams that are completely empty after the Fargate task exits. No "Job starting", "Job complete", or "Putting file into S3" logs. No error logs either. Yet when I check the S3 bucket, a file with the corresponding timestamp was created, indicating the script did run to completion. I can't understand how this is possible.

dostuff.py

#!/usr/bin/env python3.6

import asyncio
import datetime
import time

from aiohttp import ClientSession
import boto3


def s3_put(bucket, key, body):
    try:
        print(f"Putting file into {bucket}/{key}")
        client = boto3.client("s3")
        client.put_object(Bucket=bucket, Key=key, Body=body)
    except Exception:
        print(f"Error putting object into S3 Bucket: {bucket}/{key}")
        raise


async def fetch(session, number):
    url = f'https://jsonplaceholder.typicode.com/todos/{number}'
    try:
        async with session.get(url) as response:
            return await response.json()
    except Exception as e:
        print(f"Failed to fetch {url}")
        print(e)
        return None


async def fetch_all():
    tasks = []
    async with ClientSession() as session:
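        # two nested loops: 5 passes over todo ids 1-199, ~1000 requests in total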
        for x in range(1, 6):
            for number in range(1, 200):
                task = asyncio.ensure_future(fetch(session=session, number=number))
                tasks.append(task)
        responses = await asyncio.gather(*tasks)
    return responses


def main():
    try:
        loop = asyncio.get_event_loop()
        future = asyncio.ensure_future(fetch_all())
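        # filter(None, ...) drops the None placeholders that fetch() returns on failure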
        responses = list(filter(None, loop.run_until_complete(future)))
    except Exception:
        print("uh oh")
        raise

    # do stuff with responses

    body = "whatever"
    key = f"{datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d_%H-%M-%S')}_test"
    s3_put(bucket="my-s3-bucket", key=key, body=body)


if __name__ == "__main__":
    print("Job starting")
    main()
    print("Job complete")

Dockerfile

FROM python:3.6-alpine
COPY docker/test_fargate_logging/requirements.txt /
COPY docker/test_fargate_logging/dostuff.py /
WORKDIR /
RUN pip install --upgrade pip && \
    pip install -r requirements.txt
ENTRYPOINT python dostuff.py

Task definition

{
    "ipcMode": null,
    "executionRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecsInstanceRole",
    "containerDefinitions": [
        {
            "dnsSearchDomains": null,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "test-fargate-logging-stg-log-group",
                    "awslogs-region": "ap-northeast-1",
                    "awslogs-stream-prefix": "ecs"
                }
            },
            "entryPoint": null,
            "portMappings": [],
            "command": null,
            "linuxParameters": null,
            "cpu": 256,
            "environment": [],
            "ulimits": null,
            "dnsServers": null,
            "mountPoints": [],
            "workingDirectory": null,
            "secrets": null,
            "dockerSecurityOptions": null,
            "memory": 512,
            "memoryReservation": null,
            "volumesFrom": [],
            "image": "xxxxxxxxxxxx.dkr.ecr.ap-northeast-1.amazonaws.com/test-fargate-logging-stg-ecr-repository:xxxxxxxxxxxxxxxxxxxxxxxxxxxx",
            "disableNetworking": null,
            "interactive": null,
            "healthCheck": null,
            "essential": true,
            "links": null,
            "hostname": null,
            "extraHosts": null,
            "pseudoTerminal": null,
            "user": null,
            "readonlyRootFilesystem": null,
            "dockerLabels": null,
            "systemControls": null,
            "privileged": null,
            "name": "test_fargate_logging"
        }
    ],
    "placementConstraints": [],
    "memory": "512",
    "taskRoleArn": "arn:aws:iam::xxxxxxxxxxxx:role/ecsInstanceRole",
    "compatibilities": [
        "EC2",
        "FARGATE"
    ],
    "taskDefinitionArn": "arn:aws:ecs:ap-northeast-1:xxxxxxxxxxxx:task-definition/test-fargate-logging-stg-task-definition:2",
    "family": "test-fargate-logging-stg-task-definition",
    "requiresAttributes": [
        {
            "targetId": null,
            "targetType": null,
            "value": null,
            "name": "ecs.capability.execution-role-ecr-pull"
        },
        {
            "targetId": null,
            "targetType": null,
            "value": null,
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.18"
        },
        {
            "targetId": null,
            "targetType": null,
            "value": null,
            "name": "ecs.capability.task-eni"
        },
        {
            "targetId": null,
            "targetType": null,
            "value": null,
            "name": "com.amazonaws.ecs.capability.ecr-auth"
        },
        {
            "targetId": null,
            "targetType": null,
            "value": null,
            "name": "com.amazonaws.ecs.capability.task-iam-role"
        },
        {
            "targetId": null,
            "targetType": null,
            "value": null,
            "name": "ecs.capability.execution-role-awslogs"
        },
        {
            "targetId": null,
            "targetType": null,
            "value": null,
            "name": "com.amazonaws.ecs.capability.logging-driver.awslogs"
        },
        {
            "targetId": null,
            "targetType": null,
            "value": null,
            "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19"
        }
    ],
    "pidMode": null,
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "networkMode": "awsvpc",
    "cpu": "256",
    "revision": 2,
    "status": "ACTIVE",
    "volumes": []
}

Observations

  • When I reduce the number of tasks (URLs fetched) to 10 instead of ~1000, the logs seem to appear most (all?) of the time. Again, the problem is intermittent, so I can't be 100% sure (see the concurrency sketch after this list).
  • My original script also had logic to retry failed fetches, plus parsing logic, both of which I removed while troubleshooting. Back then, the logging behavior at least produced the "Job starting" log and logs emitted during the async aiohttp requests, while the log for the S3 write and the final "Job complete" log appeared only intermittently. With the simplified script above, I seem to get either all of the logs or none of them.
  • The problem also occurred with Python's logging library; I switched to print to rule logging out (see the flushing sketch just below).
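
One thing I have not conclusively ruled out is Python's own stdout buffering: under the awslogs driver, stdout is a pipe rather than a TTY, so Python block-buffers it, and buffered lines could in principle be lost if the container is torn down before they are flushed. A minimal sketch of forcing a flush, purely a guess at a diagnostic on my part, not a confirmed fix:

import sys

# stdout is block-buffered when it is a pipe (as under Docker log drivers),
# so flush on every print instead of relying on the flush at interpreter exit...
print("Job starting", flush=True)
# ...or flush explicitly at checkpoints...
sys.stdout.flush()
# ...or run the whole interpreter unbuffered: python -u, or ENV PYTHONUNBUFFERED=1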
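
Along the same lines, since cutting the request count to 10 seems to help, here is a minimal sketch of capping concurrency instead of firing ~1000 requests at once; the limit of 10 and the name fetch_limited are mine, purely illustrative:

import asyncio
from aiohttp import ClientSession

semaphore = asyncio.Semaphore(10)  # at most 10 in-flight requests

async def fetch_limited(session: ClientSession, number: int):
    # same request as fetch() in the script above, but gated by the semaphore
    url = f"https://jsonplaceholder.typicode.com/todos/{number}"
    async with semaphore:
        async with session.get(url) as response:
            return await response.json()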

2 answers:

Answer 0: (score: 0)

The problem

I ran into the same issue: intermittently missing logs in CloudWatch for ECS Fargate tasks.

While I can't answer why this happens, I can offer a workaround that I have tested.

What worked for me:

Upgrading to Python 3.7 (I was on 3.6 when I hit the same issue).
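
For reference, assuming the Dockerfile from the question, the entire change is the base image tag (the #!/usr/bin/env python3.6 shebang in dostuff.py should be bumped to match):

FROM python:3.7-alpine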

I'm now seeing all of my logs, with the benefits of the latest Python version as a bonus.

I hope this helps you.

Answer 1: (score: 0)

According to this AWS Forums link, the issue now appears to be resolved. I ran into a similar problem, and there are some useful workarounds among the answers to this question: Missing log lines when writing to cloudwatch from ECS Docker containers

You should no longer be hitting this issue. If you are, try deploying a new revision of your task definition, which should fix it.
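
If you need to force a new revision without changing anything, one option is to re-register the current task definition. A sketch using boto3 (the family name is taken from the question; the field list follows what RegisterTaskDefinition accepts):

import boto3

ecs = boto3.client("ecs", region_name="ap-northeast-1")

# Pull the active task definition...
current = ecs.describe_task_definition(
    taskDefinition="test-fargate-logging-stg-task-definition"
)["taskDefinition"]

# ...and re-register it, keeping only the fields register_task_definition
# accepts (the describe output also carries read-only fields such as
# revision, status, and taskDefinitionArn).
registerable = (
    "family", "taskRoleArn", "executionRoleArn", "networkMode",
    "containerDefinitions", "volumes", "placementConstraints",
    "requiresCompatibilities", "cpu", "memory",
)
new = ecs.register_task_definition(
    **{k: current[k] for k in registerable if k in current}
)
print(new["taskDefinition"]["taskDefinitionArn"])  # ARN of the new revision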