โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

Why should I use AWS FireLens when I want to send logs to an Elasticsearch endpoint?

I am new to ECS Fargate and was trying to send logs from an ECS Fargate application to an Elasticsearch endpoint. Everyone here seems to be using AWS FireLens with the AWS Fluent Bit image. We already had Filebeat configured from when we were running our application on an EC2 instance, but it seems we can't use Filebeat from ECS Fargate, and I was not able to find any docs to refer to. I just wanted to know if it is even possible.

Also, do I need to use FireLens if I use Filebeat? Currently it seems FireLens only supports Fluent Bit and Fluentd.
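For reference, a FireLens setup usually means adding a Fluent Bit sidecar with a `firelensConfiguration` block and pointing the application container's log driver at it. A minimal sketch (container names and the Elasticsearch host here are placeholders, not taken from the task definition below):

```json
{
    "containerDefinitions": [
        {
            "name": "log_router",
            "image": "public.ecr.aws/aws-observability/aws-for-fluent-bit:stable",
            "essential": true,
            "firelensConfiguration": { "type": "fluentbit" }
        },
        {
            "name": "app",
            "essential": true,
            "logConfiguration": {
                "logDriver": "awsfirelens",
                "options": {
                    "Name": "es",
                    "Host": "my-es-endpoint.example.com",
                    "Port": "443",
                    "tls": "On"
                }
            }
        }
    ]
}
```

The `options` map is passed through to Fluent Bit's `es` output plugin, so any of that plugin's settings (index name, AWS auth, etc.) can be added there.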

I was using the task definition below, but it was not ingesting logs.

        {
        "family": "fargate-poc",
        "containerDefinitions": [
            {
                "name": "cservice",
                "image": "******.dkr.ecr.us-east-1.amazonaws.com/service:2b1bb47",
                "cpu": 512,
                "portMappings": [
                    {
                        "name": "service-8080-tcp",
                        "containerPort": 8080,
                        "hostPort": 8080,
                        "protocol": "tcp"
                    }
                ],
                "essential": true,
                "environment": [
                    {
                        "name": "name_env",
                        "value": "egggggrgggggf"
                    },
                    {
                        "name": "JAVA_OPTS",
                        "value": "-XshowSettings:vm -Xmx1g -Xms1g"
                    },
                    {
                        "name": "SPRING_PROFILES_ACTIVE",
                        "value": "gggggg"
                    }
                ],
                "mountPoints": [
                    {
                        "sourceVolume": "logs",
                        "containerPath": "/srv/wps-*/logs"
                    }
                ],
                "volumesFrom": [],
                "logConfiguration": {
                    "logDriver": "awslogs",
                    "options": {
                        "awslogs-create-group": "true",
                        "awslogs-group": "/ecs/service-poc",
                        "awslogs-region": "us-east-1",
                        "awslogs-stream-prefix": "service"
                    },
                    "secretOptions": []
                }
            },
            {
                "name": "filebeat",
                "image": "*******.dkr.ecr.us-east-1.amazonaws.com/filebeat-non-prod:latest",
                "cpu": 256,
                "memory": 256,
                "portMappings": [],
                "essential": true,
                "environment": [],
                "command": [
                    "/bin/bash",
                    "-c",
                    "aws s3 cp s3://ilebeat/filebeat-fargate.yml /etc/filebeat/filebeat.yml && filebeat -e -c /etc/filebeat/filebeat.yml"
                ],
                "mountPoints": [
                    {
                        "sourceVolume": "logs",
                        "containerPath": "/usr/share/filebeat/logs"
                    }
                ],
                "volumesFrom": [
                    {
                        "sourceContainer": "service",
                        "readOnly": false
                    }
                ],
                "logConfiguration": {
                    "logDriver": "awslogs",
                    "options": {
                        "awslogs-create-group": "true",
                        "awslogs-group": "/ecs/service-poc",
                        "awslogs-region": "us-east-1",
                        "awslogs-stream-prefix": "filebeat"
                    },
                    "secretOptions": []
                }
            }
        ],
        "taskRoleArn": "arn:aws:iam::******:role/fargate-poc-task-role",
        "executionRoleArn": "arn:aws:iam::****:role/fargate-poc-task-role",
        "networkMode": "awsvpc",
        "volumes": [
            {
                "name": "logs",
                "host": {}
            }
        ],
        "requiresCompatibilities": [
            "FARGATE"
        ],
        "cpu": "1024",
        "memory": "2048"
    }

Thanks

AWS Fargate ECS - Can I use one container for multiple tasks?

I have a frontend application and a backend which processes data and calculates metrics based on user input from the frontend. The backend is dockerized, and every time a user wants to compute something, a new task is created with overridden container parameters. It can happen that 10 users are computing something at once, so 10 tasks are spawned from the same container. I am not sure if I can do this. Do the tasks share the container, or do they always run it separately? The computation can take up to 2 hours, so I cannot use Lambda. In the documentation I read that tasks should not share resources, but it is not really transparent. Thank you!

I tried multiple different architectures. This one seems to be the fastest; I am just worried whether I can use one container like this across multiple tasks.
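For what it's worth, the per-request pattern described above (one task per computation, launched with container overrides) can be sketched with boto3. The cluster, task definition, and container names below are hypothetical:

```python
def build_overrides(container_name, command, env):
    """Build the containerOverrides payload passed to ecs.run_task for one computation."""
    return {
        "containerOverrides": [
            {
                "name": container_name,
                "command": command,
                "environment": [{"name": k, "value": v} for k, v in env.items()],
            }
        ]
    }

overrides = build_overrides("backend", ["python", "compute.py"], {"USER_ID": "42"})

# Each run_task call starts a *separate* task, i.e. its own running container(s)
# created from the image; tasks do not share a running container.
# Hypothetical invocation (requires boto3 and AWS credentials):
# import boto3
# ecs = boto3.client("ecs")
# ecs.run_task(
#     cluster="compute-cluster",
#     taskDefinition="backend-task",
#     launchType="FARGATE",
#     overrides=overrides,
# )
```

The key point the sketch illustrates: the task definition and image are shared templates, but every launched task gets its own container processes, so long-running computations don't interfere at the container level.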

Cannot access instance metadata from within a Fargate task

I have an AWS Fargate task that runs a relatively simple Python application (with a Docker image built from python:3.6-stretch). It runs fine as an EC2 launch type task (where an EC2 host provides the Docker runtime), but I'm trying to move these to Fargate.

When I deploy my images to Fargate and they attempt to get the local IPv4 address using the URL:

'http://169.254.169.254/latest/meta-data/local-ipv4'

I get the error:

HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /latest/meta-data/local-ipv4 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f086aa8d438>: Failed to establish a new connection: [Errno 22] Invalid argument',))
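For context, 169.254.169.254 is the EC2 instance metadata service; Fargate tasks instead expose an ECS task metadata endpoint whose base URL is injected via the `ECS_CONTAINER_METADATA_URI_V4` environment variable (v4 on platform version 1.4.0; older versions use v3). A minimal sketch of pulling the task's private IPv4 from that document (field names follow the v4 task metadata schema; the sample document is made up for illustration):

```python
def parse_private_ipv4(task_metadata):
    """Extract the first private IPv4 from a v4 task-metadata /task document."""
    return task_metadata["Containers"][0]["Networks"][0]["IPv4Addresses"][0]

# Inside a running Fargate task one would fetch the document from the
# endpoint rather than the EC2 IMDS address, e.g.:
# import json, os, urllib.request
# base = os.environ["ECS_CONTAINER_METADATA_URI_V4"]
# with urllib.request.urlopen(base + "/task", timeout=2) as resp:
#     ip = parse_private_ipv4(json.load(resp))

# Made-up sample document with the shape the v4 endpoint returns:
sample = {"Containers": [{"Networks": [{"IPv4Addresses": ["10.160.17.25"]}]}]}
print(parse_private_ipv4(sample))  # -> 10.160.17.25
```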

As a side note, my Fargate container is sitting on a private subnet (there is a NAT gateway configured and the instance(s) can get out to the internet). The IP space is 10.160.16.0/20.

The image is based on the python:3.6-stretch Docker image.

Is there something I need to do to allow a Fargate task to access the link-local address?

TIA!

Can't connect to a Fargate task with ECS Execute Command even though all permissions are set

I'm having trouble connecting to a Fargate container with the ECS Execute Command feature, and it gives the following error:

An error occurred (TargetNotConnectedException) when calling the ExecuteCommand operation: The execute command failed due to an internal error. Try again later.

I've made sure I have the right permissions and setup by using the ecs-checker script, and I'm connecting to the container with the following command:

aws ecs execute-command --cluster {cluster-name} --task {task_id} --container {container name} --interactive --command "/bin/bash"

I've noticed that this usually happens when you don't have the necessary permissions, but as I've pointed out above, I've already checked with ecs-checker.sh; here is its output:

-------------------------------------------------------------
Prerequisites for the AWS CLI to use ECS Exec
-------------------------------------------------------------
  AWS CLI Version        | OK (aws-cli/2.13.4 Python/3.11.4 Darwin/22.4.0 source/arm64 prompt/off)
  Session Manager Plugin | OK (1.2.463.0)

-------------------------------------------------------------
Checks on ECS task and other resources
-------------------------------------------------------------
Region : eu-west-2
Cluster: cluster
Task   : 47e51750712a4e1c832dd996c878f38a
-------------------------------------------------------------
  Cluster Configuration  | Audit Logging Not Configured
  Can I ExecuteCommand?  | arn:aws:iam::290319421751:role/aws-reserved/sso.amazonaws.com/eu-west-2/AWSReservedSSO_PowerUserAccess_01a9cfdb5ba4af7f
     ecs:ExecuteCommand: allowed
     ssm:StartSession denied?: allowed
  Task Status            | RUNNING
  Launch Type            | Fargate
  Platform Version       | 1.4.0
  Exec Enabled for Task  | OK
  Container-Level Checks |
    ----------
      Managed Agent Status
    ----------
         1. RUNNING for "WebApp"
    ----------
      Init Process Enabled (WebAppTaskDefinition:49)
    ----------
         1. Enabled - "WebApp"
    ----------
      Read-Only Root Filesystem (WebAppTaskDefinition:49)
    ----------
         1. Disabled - "WebApp"
  Task Role Permissions  | arn:aws:iam::290319421751:role/task-role
     ssmmessages:CreateControlChannel: allowed
     ssmmessages:CreateDataChannel: allowed
     ssmmessages:OpenControlChannel: allowed
     ssmmessages:OpenDataChannel: allowed
  VPC Endpoints          |
    Found existing endpoints for vpc-11122233444:
      - com.amazonaws.eu-west-2.monitoring
      - com.amazonaws.eu-west-2.ssmmessages
  Environment Variables  | (WebAppTaskDefinition:49)
       1. container "WebApp"
       - AWS_ACCESS_KEY: not defined
       - AWS_ACCESS_KEY_ID: not defined
       - AWS_SECRET_ACCESS_KEY: not defined

What is weird about this situation is that the service is deployed to 4 environments and it works in all of them except one. They all have the same resources deployed, since the clusters are created from a CloudFormation template, and the same image is deployed in all 4 environments.

Any ideas on what could cause this?

Why is my ECS cluster with an Auto Scaling group as capacity provider not working?

No Container Instances were found in your capacity provider

I want to use an Auto Scaling group as the capacity provider for an ECS cluster. Even though I just want one container per container instance, I chose awsvpc as the network mode of my task definition. In other templates I create the Auto Scaling group with a launch template (in private subnets with NAT), a load balancer, and a target group.

  • I chose 'ip' as the target type in the target group because of the awsvpc mode in my task definition,

  • of course, the target group is NOT associated with my Auto Scaling group,

  • I'm using an ECS-optimized AMI,

  • I haven't added user data to my launch template.

Still, when I try to create my service in the cluster, an error shows: 'No Container Instances were found in your capacity provider'.

What could it be? I'm not sure if it has to do with policies, roles, and so on.

I've read that some people add user data to the launch template, but I'm not sure that's the solution for me. I want an Auto Scaling group as a capacity provider, not a single server.
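As a point of comparison, the user data people usually add does not turn the group into a single server; it tells the ECS agent on each instance which cluster to register with (without it, an ECS-optimized AMI registers with the cluster named "default", so a custom cluster sees no container instances). A minimal sketch, with the cluster name as a placeholder:

```shell
#!/bin/bash
# Launch-template user data: point the ECS agent at the cluster so the
# instance registers as a container instance there instead of "default".
echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config
```

This runs once per instance at boot, so it still scales with the Auto Scaling group rather than pinning anything to one server.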

โŒ
โŒ