โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

Elastic Beanstalk APIs issue

I have my environment running on Elastic Beanstalk and it is deployed successfully. I have also enabled HTTPS on the environment and the link is working fine. The issue is: my Python backend is hosted on Elastic Beanstalk, but when accessing the APIs only the root path, i.e. /, works; the rest of the APIs, e.g. /get_details, do not work.

Any help would be appreciated, Thanks.

I tried testing locally, checked the DynamoDB settings, etc., and it's all good.

How does S3 handle deletions with AWS SDK delete_objects compared to individual delete_object calls?

I'm currently optimizing a data purging task in my application, which involves deleting multiple files stored in AWS S3. I understand that delete_objects can handle up to 1000 keys per request, which suggests that it might use batch processing to enhance performance.

However, I haven't found explicit documentation or evidence confirming whether delete_objects actually performs faster than individual delete_object calls for the same number of files. Specifically, I'm interested in understanding:

  1. Whether delete_objects uses parallel processing or other efficiencies that significantly reduce the operation time compared to multiple single delete_object calls.
  2. How the operation time might scale when deleting different numbers of keys (e.g., 100 vs. 1000 keys).

Does anyone have benchmarks, insights from AWS documentation, or personal experiences that could clarify how delete_objects optimizes these deletions?
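Since AWS doesn't publish numbers for this, a rough way to check is to time both approaches yourself. A minimal boto3 sketch (bucket and key names are placeholders); the visible difference from the API shape is that delete_objects needs one HTTP round trip per 1000 keys while delete_object needs one per key:

import time
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"                                   # placeholder bucket
keys = [f"purge/file-{i}.dat" for i in range(1000)]    # placeholder keys

# Approach 1: one DeleteObjects call for up to 1000 keys (single request).
start = time.perf_counter()
s3.delete_objects(
    Bucket=bucket,
    Delete={"Objects": [{"Key": k} for k in keys], "Quiet": True},
)
print("delete_objects:", time.perf_counter() - start, "seconds")

# Approach 2: one DeleteObject call per key (one request each).
start = time.perf_counter()
for k in keys:
    s3.delete_object(Bucket=bucket, Key=k)
print("delete_object loop:", time.perf_counter() - start, "seconds")

Whether S3 parallelizes anything server-side inside the batch call isn't documented; the saving you can count on is the reduced number of requests and per-request overhead.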

How can I deploy multiple endpoints to a single Lambda using Serverless?

functions:
  ptDevices:
    name: ptDevices
    handler: dist/devices/serverless.handler
    timeout: 15
    architecture: arm64
    events:
      - http:
          method: any
          path: /devices
      - http:
          method: any
          path: /devices/connections
      - http:
          method: any
          path: /devices/contexts
      - http:
          method: any
          path: /devices/datetodate

How can I add all the paths to one Lambda and get the exact endpoint path for each trigger respectively?

When I run this, I'm getting:

/devices
  • any
/connections
  • any
/contexts
  • any
/datetodate
  • any

But in Lambda there is only one trigger listed for all of them (Trigger: https://qw735635.execute-api.us-east-2.amazonaws.com/dev/devices/contexts).

What I want: all 4 paths under /devices, each appearing as its own trigger on the one Lambda function (a sketch of reading the exact path inside the handler follows below).
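For illustration, a minimal sketch of the usual pattern (shown in Python here, while the original handler is Node, so treat the names as assumptions): with all four paths wired to the same function, the exact endpoint that fired can be read from the API Gateway event inside the handler.

def handler(event, context):
    # REST APIs (payload v1) put the request path in event["path"];
    # HTTP APIs (payload v2) put it in event["rawPath"].
    path = event.get("path") or event.get("rawPath", "")

    if path.endswith("/devices/connections"):
        return {"statusCode": 200, "body": "connections"}
    if path.endswith("/devices/contexts"):
        return {"statusCode": 200, "body": "contexts"}
    if path.endswith("/devices/datetodate"):
        return {"statusCode": 200, "body": "datetodate"}
    return {"statusCode": 200, "body": "devices root"}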

I can't get my sam local start-api to work with Typescript

I can't for the life of me understand how I would set up SAM to run my TypeScript/Node.js API locally.

I run sam local start-api and everything kind of starts up, with lots and lots of debug logging that doesn't tell me anything. Once I try to invoke the API endpoint with curl, I get the following error message:

26 Apr 2024 00:43:22,108 [ERROR] (rapid) Init failed error=Runtime exited with error: exit status 129 InvokeID=
26 Apr 2024 00:43:22,109 [ERROR] (rapid) Invoke failed error=Runtime exited with error: exit status 129 InvokeID=3506a416-5cd1-42a3-ba56-45ffa76d2369
26 Apr 2024 00:43:22,110 [ERROR] (rapid) Invoke DONE failed: Sandbox.Failure

Here's an example of how I set up a lambda function in the SAM template yaml:

  GetAuthenticatedUserDetailsFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: ../
      Handler: GetAuthenticatedUserDetails.lambdaHandler
      Events:
        GetAuthenticatedUserDetails:
          Type: Api
          Properties:
            RestApiId: !Ref api
            Path: /gaud
            Method: get
      Policies:
        - Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: "rds-db:connect"
              Resource: !Sub 'arn:aws:rds-db:${AWS::Region}:${AWS::AccountId}:dbuser:${DBProxyPRXID}/*'
    Metadata:
      BuildMethod: esbuild
      BuildProperties:
        Minify: true
        Target: es2020
        Sourcemap: true
        EntryPoints:
          - src/GetAuthenticatedUserDetails.ts

What am I doing wrong? I thought the SAM CLI would be smart enough to figure all this out by itself.

Here are some things that I already know:

  • The EntryPoints attribute contains the "src" directory. If I move that dir to the CodeUri attribute, then sam build fails, saying that esbuild doesn't exist on my machine (but it magically does exist with the config above).
  • The sam build command transpiles all the .ts code to .js directories in the .aws-sam directory. But it has a different structure, where the code is bundled per function.

I understand that the error points to the fact that sam local can't find the function in whatever directory it is looking in. The Typescript code is transpiled into a .aws-sam/ directory that has a completely different structure (I assume it's self contained and will be zipped up for deployment).

I did try to hack the CodeUri attribute to point to the .aws-sam directory

How can you view any file uploaded to an AWS S3 bucket through a URL in a web application?

I am developing a preview function where I want to view objects from my AWS S3 bucket. I get this URL: https://[BUCKET NAME].s3.us-east-2.amazonaws.com/[FILE NAME].[EXTENSION], and when I open it the file automatically starts to download, whereas I just want to view the file, not download it as an attachment.

I have tried the Google viewer URL: https://docs.google.com/viewerng/viewer?url=https://[BUCKET NAME].s3.us-east-2.amazonaws.com/[FILE NAME].[EXTENSION]&chrome=false. It works fine, but I was asked specifically to find a way to view the file through the AWS URL, as the Google viewer at times does not load fast and also leaves the screen blank for a long time.
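One thing that may be worth checking, sketched below with boto3 (bucket, key and content type are placeholders): S3 returns the object with the Content-Type and Content-Disposition it was stored with, so storing it with an inline disposition, or overriding both headers via a presigned GET URL, usually makes the browser render the file instead of downloading it.

import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"     # placeholder
key = "report.pdf"       # placeholder

# Store the object with an explicit content type and inline disposition.
s3.upload_file(
    "report.pdf", bucket, key,
    ExtraArgs={"ContentType": "application/pdf", "ContentDisposition": "inline"},
)

# Or override both headers on the fly with a presigned GET URL.
url = s3.generate_presigned_url(
    "get_object",
    Params={
        "Bucket": bucket,
        "Key": key,
        "ResponseContentType": "application/pdf",
        "ResponseContentDisposition": "inline",
    },
    ExpiresIn=3600,
)
print(url)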

Getting "No AWS accounts are available to you" when using aws configure sso

I set up a user in AWS IAM Identity Center. When I run aws configure sso, I enter the session name, start URL, region, and registration scope (it defaulted to sso:account:access, so I just used that), and then I log in in the browser with my user login. After a successful login, I get "No AWS accounts are available to you."

I notice after login, a file is created at /Users/[user]/.aws/sso/cache

I am trying to authenticate so that I can use "aws ecr get-login-password" to push a Docker image to the AWS container registry (ECR).

UPDATE: Watching this video cleared up how to manage accounts in IAM Identity Center: https://www.youtube.com/watch?v=_KhrGFV_Npw

Now I can successfully log in using "aws configure sso" from the command line.

However, after logging in, when I try to run this command to push a container to the AWS container registry: "aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin [id].dkr.ecr.us-east-1.amazonaws.com"

I am getting this error: Unable to locate credentials. You can configure credentials by running "aws configure".
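For context, aws configure sso writes a named profile to ~/.aws/config, and the default credential chain generally only picks it up when that profile is selected (for example via --profile on the CLI command or the AWS_PROFILE environment variable). A minimal boto3 sketch of the same idea, with the profile name as a placeholder:

import base64
import boto3

# Use the named profile that "aws configure sso" created.
session = boto3.Session(profile_name="my-sso-profile")   # placeholder name
ecr = session.client("ecr", region_name="us-east-1")

token = ecr.get_authorization_token()["authorizationData"][0]
username, password = base64.b64decode(token["authorizationToken"]).decode().split(":")
print("registry:", token["proxyEndpoint"])
# `password` is the value that `aws ecr get-login-password` prints for docker login.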

How to pass parameters from an AWS event to a lambda?

Say I want to run a Lambda via some trigger, a CloudWatch Event in this case. Is it possible to pass a parameter to this Lambda via the CloudWatch Event trigger?

My Lambda code is in Python and depends on a command line argument; it processes data based on it. It works locally, but if I turn this into a Lambda and hook it up to a CloudWatch Event, how can I pass the parameters that I currently pass via the command line?

I could run this on an EC2 instance as well, or as a Fargate task. How can I pass those arguments when this code is triggered via an event?

import boto3

def lambda_handler(event, context):
    aws_client = boto3.client('s3')

    # Get our parameter (read it from the incoming event payload)
    ...
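One common pattern, sketched below (rule name, function ARN and payload are placeholders): instead of command line arguments, the parameters are carried in the event itself. An EventBridge / CloudWatch Events rule target can be given a constant JSON input, and that JSON arrives verbatim as the event dict of lambda_handler.

import json
import boto3

events = boto3.client("events")

# A scheduled rule (it could equally be a pattern-based rule).
events.put_rule(Name="nightly-data-job", ScheduleExpression="rate(1 day)")

# Attach the Lambda as a target with a constant JSON input.
events.put_targets(
    Rule="nightly-data-job",
    Targets=[{
        "Id": "1",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:my-func",  # placeholder ARN
        "Input": json.dumps({"bucket": "my-bucket", "mode": "full"}),      # becomes `event`
    }],
)

The function also needs a resource-based permission allowing events.amazonaws.com to invoke it, which the console normally adds for you when the trigger is attached there.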

How to run a Java application on an AWS EC2 instance

I made a simple Hello World Java app, and I'm trying to run it on Amazon Web Services (AWS) by creating a virtual machine and launching an EC2 instance.

This is my Java program, HelloWorld.java:

package demo;

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World and welcome!");

    }
}

I've installed java.

[ec2-user@ip-###-##-##-## ~]$ java -version
openjdk version "22" 2024-03-19
OpenJDK Runtime Environment Corretto-22.0.0.37.1 (build 22+37-FR)
OpenJDK 64-Bit Server VM Corretto-22.0.0.37.1 (build 22+37-FR, mixed mode, sharing)

I made sure that I have javac installed:

[root@ip-###-##-##-## ec2-user]# javac -version
javac 22

But when I try to compile it, I get:

[root@ip-172-31-20-12 ec2-user]# javac HelloWorld.java
error: file not found: HelloWorld.java
Usage: javac <options> <source files>
use --help for a list of possible options

Been at this for some time but I don't know why it can't find my file.

AWS X-ray Sending segment batch failed and NoCredentialProviders

I installed the AWS X-Ray daemon on my local Ubuntu Linux machine and configured my AWS access key and secret key in ~/.aws (via aws configure). I use a sample Node.js application and installed the aws-xray-sdk npm package. I also created an IAM role with the X-Ray full access policy attached and the following trust relationship:

{"Version": "2012-10-17","Statement": [{"Effect": "Allow","Principal": {"AWS": "arn:aws:iam::000000000:root"},"Action": "sts:AssumeRole","Condition": {}}]}

My X-Ray daemon configuration file and my index.js file were attached as screenshots (not reproduced here).

Traces are generated, but the AWS X-Ray console does not receive any data, and I get an error in my log file (also attached as a screenshot).

How do I fix this? If I have made any mistake, please let me know.

From my local machine, the traces are being sent towards the AWS X-Ray console.

Next.js image upload to AWS issue

import { connectDB } from "../../../../lib/mongoose";
import { User, userValidation, loginValidation } from "../../../../models/User";
import { BadRequestError } from "../../../../lib/ErrorHandler";
import asyncHandler from "../../../../lib/asyncHandler";
import { NextRequest, NextResponse } from "next/server";
import { S3 } from "@aws-sdk/client-s3";

const s3 = new S3({
  region: process.env.AWS_REGION,
});

export const POST = asyncHandler(async (req: NextRequest) => {
  await connectDB(process.env.MONGODB_URI);
  const data = await req.json();
  const { email, firstName, lastName, password, image } = data;
  console.log('from routes', data);
  const { error } = userValidation.validate(data);
  if (error) throw new BadRequestError(error.details[0].message);

  const isEmailExist = await User.findOne({ email });
  if (isEmailExist) {
    throw new BadRequestError("email is already exist");
  }

  //   if (!image) throw new BadRequestError("Image is required.");
  //   if (!["image/jpeg", "image/png"].includes(image.type.toLowerCase())) {
  //     throw new BadRequestError(
  //       "Invalid image type. Only JPEG and PNG are allowed."
  //     );
  //   }
  //   if (image.size > 1024 * 1024 * 5) {
  //     throw new BadRequestError("Image size should not exceed 5MB.");
  //   }
  const extension = image.name.split(".").pop();
  const fileName = `${firstName}.${lastName}.${extension}`;
  const bufferedImage = await image.arrayBuffer();
  await s3.putObject({
    Bucket: process.env.AWS_BUCKET,
    Key: fileName,
    Body: Buffer.from(bufferedImage),
    ContentType: image.type,
  });

  const user = new User({
    firstName,
    lastName,
    email,
    password,
    image: fileName,
  });

  await user.save();

  return NextResponse.json(
    {
      message: `Success Signing Up!`,
      success: true,
      user,
    },
    { status: 201 }
  );
});
"use server";
import { customFetch } from "./customFetch";

export const registerNewUser = async (prevState: any, formData: FormData) => {
  const user = {
    firstName: formData.get("firstName") as string,
    lastName: formData.get("lastName") as string,
    password: formData.get("password") as string,
    confirmPassword: formData.get("confirmPassword") as string,
    email: formData.get("email") as string,
    image: formData.get("image"),
  };
  console.log("from actions", user);
  try {
    const { data } = await customFetch.post("/api/auth/sign-up", user);
    return data;
  } catch (error: any) {
    console.log(error.response.data.error);
    return { message: error.response.data.error };
  }
};

I'm trying to upload an image from the sign-up form to AWS, but I can't do anything with the image in my route. These are the console logs:

from actions {
  firstName: 'Gal',
  lastName: 'Parselany',
  password: '123456',
  confirmPassword: '123456',
  email: '[email protected]',
  image: File {
    size: 201120,
    type: 'image/jpeg',
    name: 'style4.jpeg',
    lastModified: 1713265707481
  }
}

from action { 
  firstName: 'Gal',
  lastName: 'Parselany',
  password: '123456',
  confirmPassword: '123456',
  email: '[email protected]',
  image: {}
}

As you can see, I can see the image object in my action, but in my route.ts I don't see it at all. I want to add the image to my user model, but right now nothing is written to my image prop, as you can see. Can someone help me solve it? Thanks in advance.

Is it possible to obtain the Dockerfile (or equivalent) for SageMaker's prebuilt images (e.g. one of their sklearn containers)?

I am fairly new to Docker. I usually use SageMaker's prebuilt container images when deploying an endpoint for inference (e.g. with scikit-learn). I am interested in seeing the Dockerfile (i.e. the "source code") that AWS uses to create their prebuilt images, so I can use it as both a learning tool and a template for creating my own custom containers for use with a SageMaker endpoint.

For example, if I wanted to deploy a version of an ML framework that is not yet available in a prebuilt image, or if I want to have two frameworks installed and running simultaneously in a single container (e.g. both XGBoost and Sklearn). I know I can design custom containers from scratch, but I don't really want to change much from what is already on offer; just install a couple of extra python packages and maybe upgrade the python version.

Alternatively, is it possible to issue a "pip install" command to simply install or update Python versions / packages when you initially deploy a SageMaker endpoint?
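On the last point, a hedged sketch of what the SageMaker Python SDK supports: the prebuilt framework containers will pip-install a requirements.txt that sits next to your inference script, which covers the "couple of extra packages" case without building a custom image (the Dockerfiles themselves are published in AWS GitHub repositories such as aws/sagemaker-scikit-learn-container). Bucket, role and version below are placeholders:

from sagemaker.sklearn import SKLearnModel

# source_dir is uploaded with the model; if it contains a requirements.txt,
# the prebuilt sklearn container installs it when the endpoint starts.
model = SKLearnModel(
    model_data="s3://my-bucket/model.tar.gz",                          # placeholder
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",      # placeholder
    entry_point="inference.py",
    source_dir="src",        # contains inference.py and requirements.txt
    framework_version="1.2-1",
)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")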

Link GoDaddy domain with AWS and host a WordPress site

I have a GoDaddy domain that I want to host in AWS. After a lot of googling I am mostly getting Route 53 and Lightsail as the best options, so I wanted to know which would be the better choice. My site is a simple WordPress portfolio site which is currently on my localhost, and now I want to host it. Which option would be the most affordable and easiest to use? Another question: there is already one EC2 instance running, so if I use Route 53, will it use the same EC2 instance to host my WordPress site? (Please do not recommend other shared hosting, as my client's requirement is AWS.)

AWS EC2 MySQL Client Installation

I am installing the MySQL client on my EC2 instance via CloudFormation, using the commands below in the user data:

- yum install -y https://dev.mysql.com/get/mysql80-community-release-el7-1.noarch.rpm
- yum install -y mysql-community-client.x86_64

But after logging in to the EC2 instance, I found that I cannot execute certain commands like

mysqldump
mysql

I checked in /usr/bin and found that mysql, mysqldump, mysqlcheck and other mysql related commands are not there.

Finally I checked the system log and cloud-init log of the EC2 server and found the log below:

[   75.069708] cloud-init[2691]: Installing:
[   75.071941] cloud-init[2691]: mysql-community-client         x86_64 8.0.28-1.el7     mysql80-community  53 M
[   75.076458] cloud-init[2691]: mysql-community-libs           x86_64 8.0.28-1.el7     mysql80-community 4.7 M
[   75.096330] cloud-init[2691]: replacing  mariadb-libs.x86_64 1:5.5.68-1.amzn2
[   75.099727] cloud-init[2691]: mysql-community-libs-compat    x86_64 8.0.28-1.el7     mysql80-community 1.2 M
[   75.104188] cloud-init[2691]: replacing  mariadb-libs.x86_64 1:5.5.68-1.amzn2
[   75.107629] cloud-init[2691]: Installing for dependencies:
[   75.110473] cloud-init[2691]: mysql-community-client-plugins x86_64 8.0.28-1.el7     mysql80-community 5.7 M
[   75.116311] cloud-init[2691]: mysql-community-common         x86_64 8.0.28-1.el7     mysql80-community 630 k
[   75.116510] cloud-init[2691]: ncurses-compat-libs            x86_64 6.0-8.20170212.amzn2.1.3
[   75.116976] cloud-init[2691]: amzn2-core        308 k
[   75.117416] cloud-init[2691]: Transaction Summary
[   75.117888] cloud-init[2691]: ================================================================================
[   75.118345] cloud-init[2691]: Install  3 Packages (+3 Dependent packages)
[   75.118801] cloud-init[2691]: Total download size: 65 M
[   75.119227] cloud-init[2691]: Downloading packages:
[   75.393593] cloud-init[2691]: warning: /var/cache/yum/x86_64/2/mysql80-community/packages/mysql-community-client-plugins-8.0.28-1.el7.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID 3a79bd29: NOKEY
[   75.399647] cloud-init[2691]: Public key for mysql-community-client-plugins-8.0.28-1.el7.x86_64.rpm is not installed
[   76.034754] cloud-init[2691]: --------------------------------------------------------------------------------
[   76.039388] cloud-init[2691]: Total                                               67 MB/s |  65 MB  00:00
[   76.043679] cloud-init[2691]: Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
[   76.075984] cloud-init[2691]: Importing GPG key 0x5072E1F5:
[   76.078955] cloud-init[2691]: Userid     : "MySQL Release Engineering <[email protected]>"
[   76.082649] cloud-init[2691]: Fingerprint: a4a9 4068 76fc bd3c 4567 70c8 8c71 8d3b 5072 e1f5
[   76.085875] cloud-init[2691]: Package    : mysql80-community-release-el7-1.noarch (installed)
[   76.089472] cloud-init[2691]: From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
[   76.093229] cloud-init[2691]: Public key for mysql-community-client-8.0.28-1.el7.x86_64.rpm is not installed
[   76.097724] cloud-init[2691]: Failing package is: mysql-community-client-8.0.28-1.el7.x86_64
[   76.101487] cloud-init[2691]: GPG Keys are configured as: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql

How can I resolve this issue? Please suggest.

Update: The instance is Amazon Linux 2.

User Data:

Fn::Base64: !Sub |
                #cloud-config
                repo_update: true

                write_files:
                  - content: |
                        REGION=${Region}
                        ENV=${Env}
                    path: /etc/environment
                    append: true
                  - content: "${EFSFileSystem}:/ /efs efs defaults,_netdev 0 0"
                    path: /etc/fstab
                    append: true

                packages:
                  - amazon-efs-utils
                  - jq
                  - nfs-utils
                  - ruby
                  - unzip
                  - wget
                package_update: true
                package_upgrade: true

                runcmd:
                  - [ mkdir, /efs ]
                  - [ mount, /efs ]
                  - [ sh, -c, "amazon-linux-extras install -y nginx1.12 php7.4" ]
                  - yum install -y php-opcache php-gd php-mbstring php-pecl-zip php-xml
                  - yum install -y https://dev.mysql.com/get/mysql80-community-release-el7-1.noarch.rpm
                  - yum install -y mysql-community-client.x86_64
                  - wget -q https://s3.amazonaws.com/amazoncloudwatch-agent/linux/amd64/latest/AmazonCloudWatchAgent.zip -O /tmp/AmazonCloudWatchAgent.zip
                  - unzip -d /tmp/AmazonCloudWatchAgentInstaller /tmp/AmazonCloudWatchAgent.zip
                  - rpm -ivh /tmp/AmazonCloudWatchAgentInstaller/amazon-cloudwatch-agent.rpm
                  - /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c ssm:${CloudwatchConfigSsm} -s
                  - wget -q https://aws-codedeploy-ap-northeast-1.s3.ap-northeast-1.amazonaws.com/latest/install -O /tmp/aws-codedeploy-install.rb
                  - [ ruby, /tmp/aws-codedeploy-install.rb, auto ]
                  - systemctl enable nginx
                  - service codedeploy-agent start

AWS API Gateway {"message":"Missing Authentication Token"}

I am using API Gateway to build a REST API to communicate with a deployed aws sagemaker model via aws lambda. When I test the Method (Method Test Results) my lambda function returns the required results. I've definitely deployed the API and I'm using the correct invoke URL with the resource name appended (Method Invoke URL). Finally I have checked all the auth settings for this method request (Method Auth Settings). When I input the invoke URL into the browser or try to call the REST API (from cloud9 IDE -- a web app I am developing) I get this error: {"message":"Missing Authentication Token"} (URL Response)

My API is very simple, only one POST request, it does not contain any other resources or methods. I tried also setting up the method under '/' but had the same issue.

There are a lot of people out there with this issue, and I've spent a while reading through similar posts - but the solutions boil down to the issues I've checked above. If anyone could help it would be greatly appreciated!

Iain
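One detail that is easy to trip over, sketched here with a placeholder URL: a REST API deployed to a stage is invoked at /{stage}/{resource}, and the HTTP method has to match the one configured on the resource; a browser address bar always issues a GET, and API Gateway answers any path/method combination it doesn't know with "Missing Authentication Token".

import requests

# Placeholder invoke URL: https://{api-id}.execute-api.{region}.amazonaws.com/{stage}/{resource}
url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/predict"

# Use the same method the API defines (POST here), not a browser GET.
resp = requests.post(url, json={"payload": "example input"})
print(resp.status_code, resp.text)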

Error with StreamIdentifier when using MultiStreamTracker in kinesis

I'm getting an error with StreamIdentifier when trying to use MultiStreamTracker in a kinesis consumer application.

java.lang.IllegalArgumentException: Unable to deserialize StreamIdentifier from first-stream-name

What is causing this error? I can't find a good example of using the tracker with kinesis.

The stream name works when using a consumer with a single stream so I'm not sure what is happening. It looks like the consumer is trying to parse the accountId and streamCreationEpoch. But when I create the identifiers I am using the singleStreamInstance method. Is the stream name required to have these values? They appear to be optional from the code.

This test is part of a complete example on github.

package kinesis.localstack.example;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.UUID;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

import com.amazonaws.services.kinesis.producer.KinesisProducer;
import com.amazonaws.services.kinesis.producer.KinesisProducerConfiguration;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.localstack.LocalStackContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.cloudwatch.CloudWatchAsyncClient;
import software.amazon.awssdk.services.dynamodb.DynamoDbAsyncClient;
import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
import software.amazon.kinesis.common.ConfigsBuilder;
import software.amazon.kinesis.common.InitialPositionInStream;
import software.amazon.kinesis.common.InitialPositionInStreamExtended;
import software.amazon.kinesis.common.KinesisClientUtil;
import software.amazon.kinesis.common.StreamConfig;
import software.amazon.kinesis.common.StreamIdentifier;
import software.amazon.kinesis.coordinator.Scheduler;
import software.amazon.kinesis.exceptions.InvalidStateException;
import software.amazon.kinesis.exceptions.ShutdownException;
import software.amazon.kinesis.lifecycle.events.InitializationInput;
import software.amazon.kinesis.lifecycle.events.LeaseLostInput;
import software.amazon.kinesis.lifecycle.events.ProcessRecordsInput;
import software.amazon.kinesis.lifecycle.events.ShardEndedInput;
import software.amazon.kinesis.lifecycle.events.ShutdownRequestedInput;
import software.amazon.kinesis.processor.FormerStreamsLeasesDeletionStrategy;
import software.amazon.kinesis.processor.FormerStreamsLeasesDeletionStrategy.NoLeaseDeletionStrategy;
import software.amazon.kinesis.processor.MultiStreamTracker;
import software.amazon.kinesis.processor.ShardRecordProcessor;
import software.amazon.kinesis.processor.ShardRecordProcessorFactory;
import software.amazon.kinesis.retrieval.KinesisClientRecord;
import software.amazon.kinesis.retrieval.polling.PollingConfig;

import static java.util.stream.Collectors.toList;
import static org.assertj.core.api.Assertions.assertThat;
import static org.awaitility.Awaitility.await;
import static org.testcontainers.containers.localstack.LocalStackContainer.Service.CLOUDWATCH;
import static org.testcontainers.containers.localstack.LocalStackContainer.Service.DYNAMODB;
import static org.testcontainers.containers.localstack.LocalStackContainer.Service.KINESIS;
import static software.amazon.kinesis.common.InitialPositionInStream.TRIM_HORIZON;
import static software.amazon.kinesis.common.StreamIdentifier.singleStreamInstance;

@Testcontainers
public class KinesisMultiStreamTest {
    static class TestProcessorFactory implements ShardRecordProcessorFactory {

        private final TestKinesisRecordService service;

        public TestProcessorFactory(TestKinesisRecordService service) {
            this.service = service;
        }

        @Override
        public ShardRecordProcessor shardRecordProcessor() {
            throw new UnsupportedOperationException("must have streamIdentifier");
        }

        public ShardRecordProcessor shardRecordProcessor(StreamIdentifier streamIdentifier) {
            return new TestRecordProcessor(service, streamIdentifier);
        }
    }

    static class TestRecordProcessor implements ShardRecordProcessor {

        public final TestKinesisRecordService service;
        public final StreamIdentifier streamIdentifier;

        public TestRecordProcessor(TestKinesisRecordService service, StreamIdentifier streamIdentifier) {
            this.service = service;
            this.streamIdentifier = streamIdentifier;
        }

        @Override
        public void initialize(InitializationInput initializationInput) {

        }

        @Override
        public void processRecords(ProcessRecordsInput processRecordsInput) {
            service.addRecord(streamIdentifier, processRecordsInput);
        }

        @Override
        public void leaseLost(LeaseLostInput leaseLostInput) {

        }

        @Override
        public void shardEnded(ShardEndedInput shardEndedInput) {
            try {
                shardEndedInput.checkpointer().checkpoint();
            } catch (Exception e) {
                throw new IllegalStateException(e);
            }
        }

        @Override
        public void shutdownRequested(ShutdownRequestedInput shutdownRequestedInput) {

        }
    }

    static class TestKinesisRecordService {
        private List<ProcessRecordsInput> firstStreamRecords = Collections.synchronizedList(new ArrayList<>());
        private List<ProcessRecordsInput> secondStreamRecords = Collections.synchronizedList(new ArrayList<>());

        public void addRecord(StreamIdentifier streamIdentifier, ProcessRecordsInput processRecordsInput) {
            if(streamIdentifier.streamName().contains(firstStreamName)) {
                firstStreamRecords.add(processRecordsInput);
            } else if(streamIdentifier.streamName().contains(secondStreamName)) {
                secondStreamRecords.add(processRecordsInput);
            } else {
                throw new IllegalStateException("no list for stream " + streamIdentifier);
            }
        }

        public List<ProcessRecordsInput> getFirstStreamRecords() {
            return Collections.unmodifiableList(firstStreamRecords);
        }

        public List<ProcessRecordsInput> getSecondStreamRecords() {
            return Collections.unmodifiableList(secondStreamRecords);
        }
    }

    public static final String firstStreamName = "first-stream-name";
    public static final String secondStreamName = "second-stream-name";
    public static final String partitionKey = "partition-key";

    DockerImageName localstackImage = DockerImageName.parse("localstack/localstack:latest");

    @Container
    public LocalStackContainer localstack = new LocalStackContainer(localstackImage)
            .withServices(KINESIS, CLOUDWATCH)
            .withEnv("KINESIS_INITIALIZE_STREAMS", firstStreamName + ":1," + secondStreamName + ":1");

    public Scheduler scheduler;
    public TestKinesisRecordService service = new TestKinesisRecordService();
    public KinesisProducer producer;

    @BeforeEach
    void setup() {
        KinesisAsyncClient kinesisClient = KinesisClientUtil.createKinesisAsyncClient(
                KinesisAsyncClient.builder().endpointOverride(localstack.getEndpointOverride(KINESIS)).region(Region.of(localstack.getRegion()))
        );
        DynamoDbAsyncClient dynamoClient = DynamoDbAsyncClient.builder().region(Region.of(localstack.getRegion())).endpointOverride(localstack.getEndpointOverride(DYNAMODB)).build();
        CloudWatchAsyncClient cloudWatchClient = CloudWatchAsyncClient.builder().region(Region.of(localstack.getRegion())).endpointOverride(localstack.getEndpointOverride(CLOUDWATCH)).build();

        MultiStreamTracker tracker = new MultiStreamTracker() {

            private List<StreamConfig> configs = List.of(
                    new StreamConfig(singleStreamInstance(firstStreamName), InitialPositionInStreamExtended.newInitialPosition(TRIM_HORIZON)),
                    new StreamConfig(singleStreamInstance(secondStreamName), InitialPositionInStreamExtended.newInitialPosition(TRIM_HORIZON)));
            @Override
            public List<StreamConfig> streamConfigList() {
                return configs;
            }

            @Override
            public FormerStreamsLeasesDeletionStrategy formerStreamsLeasesDeletionStrategy() {
                return new NoLeaseDeletionStrategy();
            }
        };

        ConfigsBuilder configsBuilder = new ConfigsBuilder(tracker, "KinesisPratTest", kinesisClient, dynamoClient, cloudWatchClient, UUID.randomUUID().toString(), new TestProcessorFactory(service));

        scheduler = new Scheduler(
                configsBuilder.checkpointConfig(),
                configsBuilder.coordinatorConfig(),
                configsBuilder.leaseManagementConfig(),
                configsBuilder.lifecycleConfig(),
                configsBuilder.metricsConfig(),
                configsBuilder.processorConfig().callProcessRecordsEvenForEmptyRecordList(false),
                configsBuilder.retrievalConfig()
        );

        new Thread(scheduler).start();

        producer = producer();
    }

    @AfterEach
    public void teardown() throws ExecutionException, InterruptedException, TimeoutException {
        producer.destroy();
        Future<Boolean> gracefulShutdownFuture = scheduler.startGracefulShutdown();
        gracefulShutdownFuture.get(60, TimeUnit.SECONDS);
    }

    public KinesisProducer producer() {
        var configuration = new KinesisProducerConfiguration()
                .setVerifyCertificate(false)
                .setCredentialsProvider(localstack.getDefaultCredentialsProvider())
                .setMetricsCredentialsProvider(localstack.getDefaultCredentialsProvider())
                .setRegion(localstack.getRegion())
                .setCloudwatchEndpoint(localstack.getEndpointOverride(CLOUDWATCH).getHost())
                .setCloudwatchPort(localstack.getEndpointOverride(CLOUDWATCH).getPort())
                .setKinesisEndpoint(localstack.getEndpointOverride(KINESIS).getHost())
                .setKinesisPort(localstack.getEndpointOverride(KINESIS).getPort());

        return new KinesisProducer(configuration);
    }

    @Test
    void testFirstStream() {
        String expected = "Hello";
        producer.addUserRecord(firstStreamName, partitionKey, ByteBuffer.wrap(expected.getBytes(StandardCharsets.UTF_8)));

        var result = await().timeout(600, TimeUnit.SECONDS)
                .until(() -> service.getFirstStreamRecords().stream()
                .flatMap(r -> r.records().stream())
                        .map(KinesisClientRecord::data)
                        .map(r -> StandardCharsets.UTF_8.decode(r).toString())
                .collect(toList()), records -> records.size() > 0);
        assertThat(result).anyMatch(r -> r.equals(expected));
    }

    @Test
    void testSecondStream() {
        String expected = "Hello";
        producer.addUserRecord(secondStreamName, partitionKey, ByteBuffer.wrap(expected.getBytes(StandardCharsets.UTF_8)));

        var result = await().timeout(600, TimeUnit.SECONDS)
                .until(() -> service.getSecondStreamRecords().stream()
                        .flatMap(r -> r.records().stream())
                        .map(KinesisClientRecord::data)
                        .map(r -> StandardCharsets.UTF_8.decode(r).toString())
                        .collect(toList()), records -> records.size() > 0);
        assertThat(result).anyMatch(r -> r.equals(expected));
    }
}

Here is the error I am getting.

[Thread-9] ERROR software.amazon.kinesis.coordinator.Scheduler - Worker.run caught exception, sleeping for 1000 milli seconds!
java.lang.IllegalArgumentException: Unable to deserialize StreamIdentifier from first-stream-name
    at software.amazon.kinesis.common.StreamIdentifier.multiStreamInstance(StreamIdentifier.java:75)
    at software.amazon.kinesis.coordinator.Scheduler.getStreamIdentifier(Scheduler.java:1001)
    at software.amazon.kinesis.coordinator.Scheduler.buildConsumer(Scheduler.java:917)
    at software.amazon.kinesis.coordinator.Scheduler.createOrGetShardConsumer(Scheduler.java:899)
    at software.amazon.kinesis.coordinator.Scheduler.runProcessLoop(Scheduler.java:419)
    at software.amazon.kinesis.coordinator.Scheduler.run(Scheduler.java:330)
    at java.base/java.lang.Thread.run(Thread.java:829)

Why does the information_schema.table_privileges in Redshift not support the truncate type?

I want to query the select, insert, update, delete, and truncate privileges for users on tables. However, the table_privileges view does not show the truncate privilege.

When I try to include the truncate type in the makeaclitem() function, an error occurs.

Is there an alternative?


SELECT u_grantor.usename::information_schema.sql_identifier AS grantor, 
        grantee.name::information_schema.sql_identifier AS grantee, 
        current_database()::information_schema.sql_identifier AS table_catalog, 
        nc.nspname::information_schema.sql_identifier AS table_schema, 
        c.relname::information_schema.sql_identifier AS table_name, 
        pr."type"::information_schema.character_data AS privilege_type 
FROM pg_class c, pg_namespace nc, pg_user u_grantor,
(SELECT pg_user.usesysid, 0, pg_user.usename FROM pg_user ) grantee(usesysid, grosysid, name), 
(((((( SELECT 'SELECT'::character varying
        UNION ALL 
        SELECT 'DELETE'::character varying)
        UNION ALL 
        SELECT 'INSERT'::character varying)
        UNION ALL 
        SELECT 'UPDATE'::character varying)
        UNION ALL 
        SELECT 'REFERENCES'::character varying)
        UNION ALL 
        SELECT 'TRUNCATE'::character varying)
        UNION ALL 
        SELECT 'TRIGGER'::character varying) pr("type")
        WHERE c.relnamespace = nc.oid 
AND c.relkind = 'r'::"char"
AND aclcontains(c.relacl, makeaclitem(grantee.usesysid, grantee.grosysid, u_grantor.usesysid, pr."type"::text, false))

How to get an alarm when there are no logs for a time period in AWS Cloudwatch?

I have a Java application that runs in AWS Elastic Container Service. The application polls a queue periodically. Sometimes there is no response from the queue and the application hangs forever. I have enclosed the methods in try-catch blocks and I log exceptions, yet there are no logs in CloudWatch after that point: no exceptions or errors. Is there a way that I can identify this situation (no logs in CloudWatch), for example by filtering on an error log pattern, so that I can restart the service? Any trick or solution would be appreciated.

public void handleProcess() {
    try {
        while(true) {
            Response response = QueueUitils.pollQueue(); // poll the queue
            QueueUitils.processMessage(response);
            TimeUnit.SECONDS.sleep(WAIT_TIME); // WAIT_TIME = 20
        }
    } catch (Exception e) {
        LOGGER.error("Data Queue operation failed: " + e.getMessage(), e);
        throw new RuntimeException(e);
    }
}
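One possible approach, sketched below under the assumption that the container's logs go to a CloudWatch Logs log group: every log group publishes an IncomingLogEvents metric in the AWS/Logs namespace, so an alarm that treats missing data as breaching will fire when the application stops logging for a chosen period. Log group name, topic ARN and periods are placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="ecs-app-no-logs",                                        # placeholder
    Namespace="AWS/Logs",
    MetricName="IncomingLogEvents",
    Dimensions=[{"Name": "LogGroupName", "Value": "/ecs/my-app"}],      # placeholder
    Statistic="Sum",
    Period=300,                   # 5-minute buckets
    EvaluationPeriods=6,          # alarm after ~30 minutes of silence
    Threshold=1,
    ComparisonOperator="LessThanThreshold",
    TreatMissingData="breaching", # no data at all also counts as "no logs"
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:alerts"],         # placeholder
)

The SNS action could then notify you or drive an automation that restarts the ECS service.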

Using AWS API HTTP Gateway with HTTP Backend without 301 redirection

I have a backend providing an HTTPS endpoint. It responds to a GET request that expects two parameters in the query string part of its URL.

I configured an HTTP API Gateway with a GET route that uses the above endpoint as the integration target (also with the GET method). For the needed parameters I added parameter mappings that overwrite them with static values (just for testing).

When I open the "invoke URL" of the API route in a browser, I receive a 301 (Moved Permanently) response redirecting to the original endpoint, including the two parameters with their configured static values.

Is there a way to prevent this redirect?

I do not want the API client to be aware of the real endpoint, to avoid invoking it without the API Gateway. Furthermore the client should not become aware of the static parameter values.

When I use an AWS REST API Gateway, I can configure the two parameters in the "Integration Request" as "URL query string parameters" with static values. Invoking the "Invoke URL" for the route then returns the content of the real HTTP endpoint with a 200 OK response, and the client doesn't get to know the real endpoint or the static values for the two parameters. This is exactly the behavior I would like to accomplish with the HTTP API Gateway: is there any way to configure it to work that way, too?

Furthermore, if I use an AWS HTTP API Gateway but use a Lambda function as the integration backend (regardless of whether the function has its own invocation URL or not), the API URL is not changed or forwarded either.

Which AWS Simple Email Service API is the latest

I am building an application using AWS SES, but it is not clear to me which version of the API I should be developing against.

Looking at the Amazon Simple Email Service Documentation I see both API and API v2 listed.

Logic would tell me to use v2 as that is a higher number, but at the same time the Developer Guide primarily references API (not API v2).

Similarly the Code examples section is much smaller for v2.

If I look at the .NET libraries, which is the SDK I would be using, it isn't much help either: both versions have had updates pushed in the last 24 hours, and both are on version 3.10X.XX.

Is there any documentation from AWS that indicates the status of their SES SDKs and when particular versions are going to be deprecated? I would prefer not to start developing against a specific version only to find that support is ending for it in a short time.

Thanks
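For what it's worth, the two versions surface as separate clients in the SDKs; a boto3 sketch is below (addresses are placeholders), and as far as I can tell the .NET SDK splits them the same way into AWSSDK.SimpleEmail and AWSSDK.SimpleEmailV2. The v2 call shape differs slightly:

import boto3

# SES "API" (v1) client
ses_v1 = boto3.client("ses")
ses_v1.send_email(
    Source="sender@example.com",                            # placeholder addresses
    Destination={"ToAddresses": ["recipient@example.com"]},
    Message={
        "Subject": {"Data": "Hello"},
        "Body": {"Text": {"Data": "Sent via the v1 API"}},
    },
)

# SES "API v2" client: same service, newer request shape
ses_v2 = boto3.client("sesv2")
ses_v2.send_email(
    FromEmailAddress="sender@example.com",
    Destination={"ToAddresses": ["recipient@example.com"]},
    Content={"Simple": {
        "Subject": {"Data": "Hello"},
        "Body": {"Text": {"Data": "Sent via the v2 API"}},
    }},
)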

โŒ
โŒ