โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

Deploy Socket.IO, Express, Vue, Node.js using Docker

I am making a web application and planning to deploy it to an AWS EC2 instance using Docker (ECR -> ECS -> EC2).
Locally, my Vue front end runs on port 5173 and my back end runs on port 5000.
This is what my back-end app.ts looks like:

import dotenv from 'dotenv';
dotenv.config();
import express from 'express';
import { createServer } from 'http';
import cors from 'cors';
import { Server } from 'socket.io';
import cookieParser from 'cookie-parser';
import path from 'path';

const app = express();
app.use(cors());
app.use(cookieParser());

const uri = process.env.MONGODB_URI!;
const port = process.env.PORT!;

app.use(express.static(path.join(__dirname, 'public')));
app.use(express.json());

app.get('*', (req, res) => {
    res.sendFile(path.join(__dirname, '../public/index.html'));
});

const server = createServer(app);

const io = new Server(server, {
    cors: {
        origin: 'http://localhost:5173',
        methods: ['GET', 'POST'],
        credentials: true,
    },
});

server.listen(port, () => {
    console.log(`Socket.io server is running on port ${port}`);
});

In the front end I have a socket.ts file next to main.ts that connects the socket to the back end.

import { io } from 'socket.io-client';
const socket = io('http://localhost:5000');
export { socket };
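
The hard-coded localhost URL only makes sense in development, since in production the Express server serves the built Vue app from the same origin. A rough sketch of an environment-aware connection I have in mind (VITE_API_URL is just a placeholder Vite env variable, not something that exists in my setup yet):

import { io } from 'socket.io-client';

// In dev the API lives on a separate origin (http://localhost:5000);
// in production the server serves the client build, so a same-origin
// connection (io() with no URL) should be enough.
const socket = import.meta.env.DEV
    ? io(import.meta.env.VITE_API_URL ?? 'http://localhost:5000')
    : io();

export { socket };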

I have set up the Dockerfile like this:

FROM node:latest as server
WORKDIR /srv/app
COPY ./server/package*.json ./
RUN npm install
COPY ./server .
RUN npm run build

FROM node:latest as client
WORKDIR /srv/app
COPY ./client/package*.json ./
RUN npm install
COPY ./client .
RUN npm run build

FROM node:latest as production
WORKDIR /srv/app
RUN mkdir /public
COPY --from=server /srv/app/dist ./
COPY --from=client /srv/app/dist ./public
ENV NODE_ENV=production
EXPOSE 5000
CMD ["node", "app.js"]

and compose.yml file:

services:
  app:
    container_name: app
    build: .
    env_file:
      - ./prod.env
    ports:
      - "5000:5000"
    expose:
      - 5000

What should I modify to get Socket.IO working when I deploy the Docker image to an AWS EC2 instance?
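
Related to that, the Socket.IO CORS origin in app.ts is also pinned to the dev URL. A sketch of how I think it could be made configurable (CLIENT_ORIGIN is a placeholder env variable I would add, not something in the current code):

const io = new Server(server, {
    cors: {
        // Placeholder: set CLIENT_ORIGIN to the deployed site's origin in prod.env.
        origin: process.env.CLIENT_ORIGIN ?? 'http://localhost:5173',
        methods: ['GET', 'POST'],
        credentials: true,
    },
});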

How to run a Java application on an AWS EC2 instance

I made a simple Hello World Java app, and I'm trying to run it on Amazon Web Services (AWS) by launching an EC2 instance.

This is my Java program, HelloWorld.java:

package demo;

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World and welcome!");

    }
}

I've installed Java:

[ec2-user@ip-###-##-##-## ~]$ java -version
openjdk version "22" 2024-03-19
OpenJDK Runtime Environment Corretto-22.0.0.37.1 (build 22+37-FR)
OpenJDK 64-Bit Server VM Corretto-22.0.0.37.1 (build 22+37-FR, mixed mode, sharing)

I made sure that javac is also installed:

[root@ip-###-##-##-## ec2-user]# javac -version
javac 22

But when I try to compile it, I get:

[root@ip-172-31-20-12 ec2-user]# javac HelloWorld.java
error: file not found: HelloWorld.java
Usage: javac <options> <source files>
use --help for a list of possible options

I've been at this for some time, but I don't know why it can't find my file.

Link a GoDaddy domain with AWS and host a WordPress site

I have a GoDaddy domain that I want to host on AWS. After a lot of googling, Route 53 and Lightsail come up as the most common options, so I wanted to know which is the better choice. My site is a simple WordPress portfolio site that currently runs on my localhost, and I want the most affordable and easiest option for hosting it. Another question: there is already one EC2 instance running, so if I use Route 53, will it use that same EC2 instance to host my WordPress site? (Please do not recommend other shared hosting, as my client's requirement is AWS.)

Where do I put my PHP files on my EC2 Ubuntu server?

I've been working on my first full-stack web dev project for a little while now. To summarize, I have a small Vite app with a form that takes in data from a user and then uses a PHP file to submit that data to a MySQL server. It can also read from the MySQL server and place its data in a table using another PHP file.

I'm very new to PHP, but this all works locally. I am able to run the preview version of my app on localhost and connect to a database in XAMPP. With XAMPP, I know to put my PHP files in the htdocs folder that is part of the application. However, I do not know where these files should go on the EC2 Ubuntu server I have running with Caddy. The application runs normally there, and I have MySQL with the same database there, but I don't know how to "connect" my TypeScript-based Vite app to the MySQL database using the files I have written. Thus, no data appears in the table, nor can I submit to it; I get a network error.

Basically, is there an equivalent to the XAMPP 'htdocs' folder on an EC2 Ubuntu server that a Vite app will understand? I would place the files wherever "localhost/" is, but once my application is compiled, I don't know where that is.
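
For the front-end side of this, I assume the fetch calls should use relative paths so they resolve against whichever origin Caddy serves, rather than a hard-coded localhost URL. A rough sketch of what I mean (the /api/submit.php path is a guess at how I would route it, not my actual setup):

// A relative URL resolves against the origin that serves the built app,
// so the same code works on localhost and on the EC2 server.
async function submitForm(data: Record<string, string>) {
    const response = await fetch('/api/submit.php', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(data),
    });
    if (!response.ok) {
        throw new Error(`Request failed: ${response.status}`);
    }
    return response.json();
}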

(You might ask why I am using PHP, which I am unfamiliar with, rather than a TypeScript back end; it is because later on I want to be able to generate new HTML code, which to my understanding TypeScript cannot do.)

So far, I have tried putting them in a few places: the public folder, the src folder, and the root folder. I have also seen other posts suggesting places like /var/www/html, but that doesn't seem to apply to me; I don't even see that directory, or /var/www. I am running my app using Caddy, not Apache.

No matter which place I try, I get this network error. If you think it may not just be a matter of file location but a security issue, feel free to ask me additional questions! Since this is a new instance and is still being set up with HTTPS, it may be a CORS problem as well. I'm partially asking this question because I really just don't know how PHP works and would like clarification I haven't been able to find elsewhere.

AWS EC2 MySQL Client Installation

I am installing the MySQL client on my EC2 instance via CloudFormation, using the commands below in the user data:

- yum install -y https://dev.mysql.com/get/mysql80-community-release-el7-1.noarch.rpm
- yum install -y mysql-community-client.x86_64

But after logging in to the EC2 instance, I found that I cannot execute certain commands like

mysqldump
mysql

I checked /usr/bin and found that mysql, mysqldump, mysqlcheck and other MySQL-related commands are not there.

Finally, I checked the system log and the cloud-init log of the EC2 server and found the following:

[   75.069708] cloud-init[2691]: Installing:
[   75.071941] cloud-init[2691]: mysql-community-client         x86_64 8.0.28-1.el7     mysql80-community  53 M
[   75.076458] cloud-init[2691]: mysql-community-libs           x86_64 8.0.28-1.el7     mysql80-community 4.7 M
[   75.096330] cloud-init[2691]: replacing  mariadb-libs.x86_64 1:5.5.68-1.amzn2
[   75.099727] cloud-init[2691]: mysql-community-libs-compat    x86_64 8.0.28-1.el7     mysql80-community 1.2 M
[   75.104188] cloud-init[2691]: replacing  mariadb-libs.x86_64 1:5.5.68-1.amzn2
[   75.107629] cloud-init[2691]: Installing for dependencies:
[   75.110473] cloud-init[2691]: mysql-community-client-plugins x86_64 8.0.28-1.el7     mysql80-community 5.7 M
[   75.116311] cloud-init[2691]: mysql-community-common         x86_64 8.0.28-1.el7     mysql80-community 630 k
[   75.116510] cloud-init[2691]: ncurses-compat-libs            x86_64 6.0-8.20170212.amzn2.1.3
[   75.116976] cloud-init[2691]: amzn2-core        308 k
[   75.117416] cloud-init[2691]: Transaction Summary
[   75.117888] cloud-init[2691]: ================================================================================
[   75.118345] cloud-init[2691]: Install  3 Packages (+3 Dependent packages)
[   75.118801] cloud-init[2691]: Total download size: 65 M
[   75.119227] cloud-init[2691]: Downloading packages:
[   75.393593] cloud-init[2691]: warning: /var/cache/yum/x86_64/2/mysql80-community/packages/mysql-community-client-plugins-8.0.28-1.el7.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID 3a79bd29: NOKEY
[   75.399647] cloud-init[2691]: Public key for mysql-community-client-plugins-8.0.28-1.el7.x86_64.rpm is not installed
[   76.034754] cloud-init[2691]: --------------------------------------------------------------------------------
[   76.039388] cloud-init[2691]: Total                                               67 MB/s |  65 MB  00:00
[   76.043679] cloud-init[2691]: Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
[   76.075984] cloud-init[2691]: Importing GPG key 0x5072E1F5:
[   76.078955] cloud-init[2691]: Userid     : "MySQL Release Engineering <[email protected]>"
[   76.082649] cloud-init[2691]: Fingerprint: a4a9 4068 76fc bd3c 4567 70c8 8c71 8d3b 5072 e1f5
[   76.085875] cloud-init[2691]: Package    : mysql80-community-release-el7-1.noarch (installed)
[   76.089472] cloud-init[2691]: From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
[   76.093229] cloud-init[2691]: Public key for mysql-community-client-8.0.28-1.el7.x86_64.rpm is not installed
[   76.097724] cloud-init[2691]: Failing package is: mysql-community-client-8.0.28-1.el7.x86_64
[   76.101487] cloud-init[2691]: GPG Keys are configured as: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql

How can I resolve this issue? Any suggestions are appreciated.

Update: the instance is Amazon Linux 2.

User Data:

Fn::Base64: !Sub |
                #cloud-config
                repo_update: true

                write_files:
                  - content: |
                        REGION=${Region}
                        ENV=${Env}
                    path: /etc/environment
                    append: true
                  - content: "${EFSFileSystem}:/ /efs efs defaults,_netdev 0 0"
                    path: /etc/fstab
                    append: true

                packages:
                  - amazon-efs-utils
                  - jq
                  - nfs-utils
                  - ruby
                  - unzip
                  - wget
                package_update: true
                package_upgrade: true

                runcmd:
                  - [ mkdir, /efs ]
                  - [ mount, /efs ]
                  - [ sh, -c, "amazon-linux-extras install -y nginx1.12 php7.4" ]
                  - yum install -y php-opcache php-gd php-mbstring php-pecl-zip php-xml
                  - yum install -y https://dev.mysql.com/get/mysql80-community-release-el7-1.noarch.rpm
                  - yum install -y mysql-community-client.x86_64
                  - wget -q https://s3.amazonaws.com/amazoncloudwatch-agent/linux/amd64/latest/AmazonCloudWatchAgent.zip -O /tmp/AmazonCloudWatchAgent.zip
                  - unzip -d /tmp/AmazonCloudWatchAgentInstaller /tmp/AmazonCloudWatchAgent.zip
                  - rpm -ivh /tmp/AmazonCloudWatchAgentInstaller/amazon-cloudwatch-agent.rpm
                  - /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c ssm:${CloudwatchConfigSsm} -s
                  - wget -q https://aws-codedeploy-ap-northeast-1.s3.ap-northeast-1.amazonaws.com/latest/install -O /tmp/aws-codedeploy-install.rb
                  - [ ruby, /tmp/aws-codedeploy-install.rb, auto ]
                  - systemctl enable nginx
                  - service codedeploy-agent start

No GPU EC2 instances associated with AWS Batch

I need to set up GPU-backed instances on AWS Batch.

Here's my .yaml file:

  GPULargeLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        UserData:
          Fn::Base64:
            Fn::Sub: |
              MIME-Version: 1.0
              Content-Type: multipart/mixed; boundary="==BOUNDARY=="

              --==BOUNDARY==
              Content-Type: text/cloud-config; charset="us-ascii"

              runcmd:
                - yum install -y aws-cfn-bootstrap
                - echo ECS_LOGLEVEL=debug >> /etc/ecs/ecs.config
                - echo ECS_IMAGE_CLEANUP_INTERVAL=60m >> /etc/ecs/ecs.config
                - echo ECS_IMAGE_MINIMUM_CLEANUP_AGE=60m >> /etc/ecs/ecs.config
                - /opt/aws/bin/cfn-init -v --region us-west-2 --stack cool_stack --resource LaunchConfiguration
                - echo "DEVS=/dev/xvda" > /etc/sysconfig/docker-storage-setup
                - echo "VG=docker" >> /etc/sysconfig/docker-storage-setup
                - echo "DATA_SIZE=99%FREE" >> /etc/sysconfig/docker-storage-setup
                - echo "AUTO_EXTEND_POOL=yes" >> /etc/sysconfig/docker-storage-setup
                - echo "LV_ERROR_WHEN_FULL=yes" >> /etc/sysconfig/docker-storage-setup
                - echo "EXTRA_STORAGE_OPTIONS=\"--storage-opt dm.fs=ext4 --storage-opt dm.basesize=64G\"" >> /etc/sysconfig/docker-storage-setup
                - /usr/bin/docker-storage-setup
                - yum update -y
                - echo "OPTIONS=\"--default-ulimit nofile=1024000:1024000 --storage-opt dm.basesize=64G\"" >> /etc/sysconfig/docker
                - /etc/init.d/docker restart

              --==BOUNDARY==--
      LaunchTemplateName: GPULargeLaunchTemplate

  GPULargeBatchComputeEnvironment:
    DependsOn:
      - ComputeRole
      - ComputeInstanceProfile
    Type: AWS::Batch::ComputeEnvironment
    Properties:
      Type: MANAGED
      ComputeResources:
        ImageId: ami-GPU-optimized-AMI-ID
        AllocationStrategy: BEST_FIT_PROGRESSIVE
        LaunchTemplate:
          LaunchTemplateId:
            Ref: GPULargeLaunchTemplate
          Version:
            Fn::GetAtt:
              - GPULargeLaunchTemplate
              - LatestVersionNumber
        InstanceRole:
          Ref: ComputeInstanceProfile
        InstanceTypes:
          - g4dn.xlarge
        MaxvCpus: 768
        MinvCpus: 1
        SecurityGroupIds:
          - Fn::GetAtt:
              - ComputeSecurityGroup
              - GroupId
        Subnets:
          - Ref: ComputePrivateSubnetA
        Type: EC2
        UpdateToLatestImageVersion: True

  MyGPUBatchJobQueue:
    Type: AWS::Batch::JobQueue
    Properties:
      ComputeEnvironmentOrder:
        - ComputeEnvironment:
            Ref: GPULargeBatchComputeEnvironment
          Order: 1
      Priority: 5
      JobQueueName: MyGPUBatchJobQueue
      State: ENABLED

  MyGPUJobDefinition:
    Type: AWS::Batch::JobDefinition
    Properties:
      Type: container
      ContainerProperties:
        Command:
          - "/opt/bin/python3"
          - "/opt/bin/start.py"
          - "--retry_count"
          - "Ref::batchRetryCount"
          - "--retry_limit"
          - "Ref::batchRetryLimit"
        Environment:
          - Name: "Region"
            Value: "us-west-2"
          - Name: "LANG"
            Value: "en_US.UTF-8"
        Image:
          Fn::Sub: "cool_1234_abc.dkr.ecr.us-west-2.amazonaws.com/my-image"
        JobRoleArn:
          Fn::Sub: "arn:aws:iam::cool_1234_abc:role/ComputeRole"
        Memory: 16000
        Vcpus: 1
        ResourceRequirements:
          - Type: GPU
            Value: '1'
      JobDefinitionName: MyGPUJobDefinition
      Timeout:
        AttemptDurationSeconds: 500

When I start a job, it is stuck in the RUNNABLE state forever, so I tried the following:

  1. When I swapped the instance type to normal CPU types, redeployed the CF stack, and submitted a job, the job ran and succeeded fine, so something must be missing or wrong with the way I'm using these GPU instance types on AWS Batch.
  2. Then I found a post suggesting an ImageId, so I added an ImageId field to my ComputeEnvironment with a known GPU-optimized AMI, but still no luck.
  3. I did a side-by-side comparison of the jobs between the working CPU AWS Batch setup and the non-working GPU one by running aws batch describe-jobs --jobs AWS_BATCH_JOB_EXECUTION_ID --region us-west-2, and found that the difference is containerInstanceArn and taskArn: in the non-working GPU case these two fields are simply missing.
  4. I found that the GPU instance is in the ASG (Auto Scaling Group) created by the compute environment, but when I go to ECS and choose this GPU cluster, there are no container instances associated with it, unlike the working CPU case, where the ECS cluster does have container instances registered.

Any ideas how to fix this would be greatly appreciated!

The EBS direct API AmazonEBSClient.GetSnapshotBlockAsync is very slow

I use the EBS direct APIs to access the contents of an EBS snapshot. When I call the .NET API AmazonEBSClient.GetSnapshotBlockAsync to get the data in a block, the response is very slow: it takes almost 5 minutes after I call GetSnapshotBlockAsync, with no exceptions or throttling errors during the call. The snapshot is fairly large, at the TB level. I was wondering why the API is so slow.

Is there any method to speed up the API response, or am I using the API in the wrong way?

Thanks,
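
One direction I am considering is fetching several blocks concurrently instead of strictly one at a time, since per-block latency seems to dominate on a snapshot this large. A rough sketch of that pattern, written with the JavaScript SDK (@aws-sdk/client-ebs) rather than the .NET client I actually use; the region and the concurrency of 8 are placeholders:

import { EBSClient, GetSnapshotBlockCommand, ListSnapshotBlocksCommand } from '@aws-sdk/client-ebs';

const client = new EBSClient({ region: 'us-east-1' }); // placeholder region
const CONCURRENCY = 8;                                  // placeholder batch size

async function readSnapshotBlocks(snapshotId: string) {
    let nextToken: string | undefined;
    do {
        // List a page of allocated blocks, then fetch them in small parallel batches.
        const page = await client.send(new ListSnapshotBlocksCommand({
            SnapshotId: snapshotId,
            NextToken: nextToken,
        }));
        const blocks = page.Blocks ?? [];
        for (let i = 0; i < blocks.length; i += CONCURRENCY) {
            await Promise.all(blocks.slice(i, i + CONCURRENCY).map((b) =>
                client.send(new GetSnapshotBlockCommand({
                    SnapshotId: snapshotId,
                    BlockIndex: b.BlockIndex!,
                    BlockToken: b.BlockToken!,
                }))
            ));
        }
        nextToken = page.NextToken;
    } while (nextToken);
}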

WordPress website is not loading the header images

I have configured a WordPress website on an EC2 instance. I accessed the wp-config.php file and changed the URL structure from http to https, but the header images on the website are not loading.

What else should I do to get the images displayed? Should I try a reset from the Appearance section, or something else? I went through many similar questions before posting this one but still wasn't able to resolve it.

Not able to secure WordPress website on AWS

I have tried to secure my WordPress website hosted on an AWS EC2 instance, but after following many tutorials on YouTube I'm still not able to do it.

Steps I followed

  1. EC2 instance configured with WordPress Certified by Bitnami and Automattic 6.4.2-8-r11 on Debian 11-AutogenByAWSMP--2
  2. Load balancer configured
  3. Load balancer attached to the EC2 instance
  4. Created an SSL certificate with ACM
  5. Used Route 53 and changed the nameservers in GoDaddy
  6. Created an HTTPS listener with listener and instance port 443
  7. Created the Route 53 records for the SSL certificate
  8. Used the security group created by WordPress Certified by Bitnami and Automattic 6.4.2-8-r11 on Debian 11-AutogenByAWSMP--2
  9. Used the ELB security policy ELBSecurityPolicy-2016-08

and still the website does not show as secure. What should I do about it?

SMTP server in EC2 instance: Relay access denied

I am trying to set up an SMTP server on EC2 by installing Postfix on an EC2 Ubuntu instance. But when I try to run my Node.js code to send mail, I get the error below.

 Error: Can't send mail - all recipients were rejected: 454 4.7.1 <[email protected]>: Relay access denied
    at SMTPConnection._formatError (C:\Users\path\node_modules\nodemailer\lib\smtp-connection\index.js:790:19)
    at SMTPConnection._actionRCPT (C:\Users\path\node_modules\nodemailer\lib\smtp-connection\index.js:1654:28)
    at SMTPConnection.<anonymous> (C:\Users\path\node_modules\nodemailer\lib\smtp-connection\index.js:1607:30)
    at SMTPConnection._processResponse (C:\Users\path\node_modules\nodemailer\lib\smtp-connection\index.js:969:20)
    at SMTPConnection._onData (C:\Users\path\node_modules\nodemailer\lib\smtp-connection\index.js:755:14)
    at SMTPConnection._onSocketData (C:\Users\path\node_modules\nodemailer\lib\smtp-connection\index.js:193:44)
    at TLSSocket.emit (node:events:514:28)
    at addChunk (node:internal/streams/readable:324:12)
    at readableAddChunk (node:internal/streams/readable:297:9)
    at Readable.push (node:internal/streams/readable:234:10) {
  code: 'EENVELOPE',
  response: '454 4.7.1 <[email protected]>: Relay access denied',
  responseCode: 454,
  command: 'RCPT TO',
  rejected: [ '<[email protected]>' ],
  rejectedErrors: [
    Error: Recipient command failed: 454 4.7.1 <[email protected]>: Relay access denied
        at SMTPConnection._formatError (C:\Users\path\node_modules\nodemailer\lib\smtp-connection\index.js:790:19)
        at SMTPConnection._actionRCPT (C:\Users\path\node_modules\nodemailer\lib\smtp-connection\index.js:1640:24)
        at SMTPConnection.<anonymous> (C:\Users\path\node_modules\nodemailer\lib\smtp-connection\index.js:1607:30)
        at SMTPConnection._processResponse (C:\path\burra\node_modules\nodemailer\lib\smtp-connection\index.js:969:20)
        at SMTPConnection._onData (C:\Users\path\node_modules\nodemailer\lib\smtp-connection\index.js:755:14)
        at SMTPConnection._onSocketData (C:\Users\path\node_modules\nodemailer\lib\smtp-connection\index.js:193:44)
        at TLSSocket.emit (node:events:514:28)
        at addChunk (node:internal/streams/readable:324:12)
        at readableAddChunk (node:internal/streams/readable:297:9)
        at Readable.push (node:internal/streams/readable:234:10) {
      code: 'EENVELOPE',
      response: '454 4.7.1 <[email protected]>: Relay access denied',
      responseCode: 454,
      command: 'RCPT TO',
      recipient: '<[email protected]>'
    }
  ]
}

I am not sure what went wrong or how I'm supposed to fix it. Any help would be appreciated. Thanks in advance!

I did set up the SMTP server and made the necessary configurations, like opening the port and the inbound settings in EC2, but none of it seems to work.

This is the code I've been trying to run:

const nodemailer = require('nodemailer');

const transporter = nodemailer.createTransport({
    host: 'EC2_IP', // Replace with your EC2 instance public IP or domain
    port: 25,
    secure: false,
    tls: {
        rejectUnauthorized: false
    },
    auth: {
        user: 'EC2_UBUNTU_LOGIN_USERNAME', // Use the system user associated with Postfix
        pass: 'EC2_UBUNTU_LOGIN_PASSWORD' // Use the system user's password
    }
});

const mailOptions = {
    from: '[email protected]',
    to: '[email protected]',
    subject: 'Test Email',
    text: 'This is a test email from your Node.js script.'
};

transporter.sendMail(mailOptions, (error, info) => {
    if (error) {
        return console.error('Error:', error);
    }
    console.log('Email sent:', info.response);
});
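
From what I have read, "454 Relay access denied" usually means Postfix does not consider the client allowed to relay (it is neither listed in mynetworks nor SASL-authenticated), so one thing I am considering is authenticated submission on port 587 instead of port 25. A rough sketch of that variant (the host and credentials are placeholders, and it assumes Postfix is configured for SASL-authenticated submission, which mine may not be yet):

const nodemailer = require('nodemailer');

// Sketch only: assumes the Postfix submission service (port 587) is enabled
// and accepts SASL-authenticated relaying. Host and credentials are placeholders.
const transporter = nodemailer.createTransport({
    host: 'EC2_IP_OR_DOMAIN',
    port: 587,
    secure: false,      // STARTTLS is negotiated after connecting
    requireTLS: true,
    auth: {
        user: 'SMTP_USERNAME',
        pass: 'SMTP_PASSWORD'
    }
});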

Error creating a website in IIS: can't reach this page

I was creating a site on an AWS EC2 instance with Windows Server. I installed a basic IIS with only basic authentication added as an extra, and when adding my site I select the path of the folder that contains an index.html with its images and CSS. But when I check it by opening it in the browser, even locally I get a "can't reach this page" error. I even tried creating a different, new index.html without subfolders and it still gives me the same error.

I have tried with a new website, following forums and videos in case I did some part of the installation wrong, but I can't find the solution.

How to recover ssh key for an Ubuntu Server on AWS

I've recently resumed working on a web app for my startup, and I have access to 3 of the 4 SSH keys I need to access my 4 EC2 instances (Ubuntu 19). The problem is that I can't currently retrieve the 4th SSH key.

Info:

  • All 4 EC2 instances are happily running
  • The 4th EC2 instance is running MySQL
  • The 4th EC2 instance has an AMI image (which contains the SSH key)

Long story short, I would need to organise some data recovery in order to obtain the 4th SSH key, so I'm trying to save time by retrieving the key more quickly.

Any useful ideas would be gratefully received.

Thank you.

Apache FTP Server - Connection timeout after 20 seconds of inactivity - Failed to retrieve directory listing

I was looking for a way to create an embedded FTP server. I came across an example, "Writing a Java FTP server", which I copied and tested locally, and everything seemed fine.

So I went on and deployed the example to an AWS EC2 instance, and then tried to access it using Ubuntu and FileZilla as clients, but I keep getting the same issue where directories do not get listed. I've even tried opening all ports for the instance and I still get the same thing:

[Screenshot: FileZilla output]

I thought it might have something to do with active/passive mode settings, so I tried both in FileZilla, but I still get the same issue, so I'm all out of ideas. Does anyone know how to resolve this?

AWS instance type is not supported by current Deep Learning AMI

I'm running code on an EC2 instance:

  • Instance type: t2.micro
  • Image: Deep Learning AMI GPU PyTorch 2.0.1 (Amazon Linux 2) 20230815

Before running the code, I need to run conda activate pytorch. When I launched this instance in August, it worked fine. However, I had to restart the instance and ran into this error:

ERROR: Please note that the Amazon EC2 t2.micro instance type is not supported by current Deep Learning AMI.

Has anyone seen this before? Is this a new stipulation? I'm in a tough spot now, because I assumed I'd be able to start and stop the instance without issue, but it looks like I'll need to pay for a larger instance.

โŒ
โŒ