tl;dr: DarwixAI DevOps Test

πŸ“ Architecture Overview

Task details and how I completed the assignment:

A custom VPC with:

  • 2 public subnets (for ALB, NAT Gateway)
  • 2 private subnets (for EC2 via Auto Scaling Group)
  • An Internet Gateway for public internet access
  • A NAT Gateway for private subnet outbound access

A Security setup:

  • ALB accessible on ports 80/443 from the internet
  • EC2 instances accessible only from the ALB
  • SSH restricted to a specific IP

An Application Load Balancer (ALB) in front of EC2s

An Auto Scaling Group (ASG) with launch templates for EC2 (Ubuntu, t2.micro)

  • Amazon Linux would have been the more convenient base image, since CloudWatch is easier to set up there than on Ubuntu, but I'm biased towards Ubuntu

CloudWatch Logs for NGINX access and error logs

🗂️ Directory Structure

aws-terraform-setup/
├── .github/
│   └── workflows/
│       └── terraform.yml   # GitHub Actions workflow
├── main.tf                 # Main Terraform config
├── userdata.sh             # User data script for EC2
└── README.md               # This documentation

🧱 Infrastructure Stack

VPC and Subnets


resource "aws_vpc" "main" {{
  cidr_block = "10.0.0.0/16"
  enable_dns_support = true
  enable_dns_hostnames = true
}}

resource "aws_subnet" "public" {{
  count = 2
  vpc_id = aws_vpc.main.id
  cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  map_public_ip_on_launch = true
  availability_zone = data.aws_availability_zones.available.names[count.index]
}}

resource "aws_subnet" "private" {{
  count = 2
  vpc_id = aws_vpc.main.id
  cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 2)
  availability_zone = data.aws_availability_zones.available.names[count.index]
}}
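
Both subnet resources reference an availability-zone data source that is not shown in this snippet; it would look something like this:

data "aws_availability_zones" "available" {
  # only return AZs that are currently usable in the region
  state = "available"
}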

Internet and NAT Gateway


resource "aws_internet_gateway" "igw" {{
  vpc_id = aws_vpc.main.id
}}

resource "aws_eip" "nat" {{
  domain = "vpc"
}}

resource "aws_nat_gateway" "nat" {{
  allocation_id = aws_eip.nat.id
  subnet_id = aws_subnet.public[0].id
}}

Route Tables and Associations


resource "aws_route_table" "public" {{
  vpc_id = aws_vpc.main.id
  route {{
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }}
}}

resource "aws_route_table" "private" {{
  vpc_id = aws_vpc.main.id
  route {{
    cidr_block = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }}
}}
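
The heading also covers associations, but the association resources are not shown; a sketch of how each subnet would be bound to its route table:

resource "aws_route_table_association" "public" {
  count          = 2
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "private" {
  count          = 2
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private.id
}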

🔐 Security Groups


resource "aws_security_group" "alb_sg" {{
  name        = "alb-sg"
  vpc_id      = aws_vpc.main.id

  ingress {{
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }}

  egress {{
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }}
}}
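
The security requirements also call for an instance security group that only admits traffic from the ALB and restricts SSH to a specific IP. A minimal sketch, assuming a web_sg name and an allowed_ssh_cidr variable that are not in the original code:

variable "allowed_ssh_cidr" {
  description = "Single trusted IP allowed to SSH, e.g. 203.0.113.10/32 (illustrative)"
  type        = string
}

resource "aws_security_group" "web_sg" {
  name   = "web-sg"
  vpc_id = aws_vpc.main.id

  # HTTP only from the ALB security group, never directly from the internet
  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.alb_sg.id]
  }

  # SSH restricted to a single trusted IP
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.allowed_ssh_cidr]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}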

🚀 EC2 Launch Template and ASG

User Data Script

#!/bin/bash
apt update
# The "awslogs" package is not in the Ubuntu repos and would make this apt call fail;
# CloudWatch agent installation is sketched separately below
apt install -y nginx
echo "<h1>Welcome to DevOps Assessment</h1>" > /var/www/html/index.html
systemctl start nginx
systemctl enable nginx
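
Since the summary notes that the CloudWatch log paths were left out of user_data.sh, here is a sketch of what the missing piece could look like on Ubuntu. The log group names and the agent download/config paths follow the standard CloudWatch agent layout, but they are my assumptions, not values from the original setup:

# Sketch: install and configure the CloudWatch agent to ship NGINX logs.
# Requires an instance profile with CloudWatchAgentServerPolicy attached (see the IAM Q&A below).
wget -q https://amazoncloudwatch-agent.s3.amazonaws.com/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
dpkg -i amazon-cloudwatch-agent.deb

cat <<'EOF' > /opt/aws/amazon-cloudwatch-agent/etc/config.json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          { "file_path": "/var/log/nginx/access.log", "log_group_name": "nginx-access", "log_stream_name": "{instance_id}" },
          { "file_path": "/var/log/nginx/error.log", "log_group_name": "nginx-error", "log_stream_name": "{instance_id}" }
        ]
      }
    }
  }
}
EOF

# Load the config and start the agent
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/etc/config.json -s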

Terraform Launch Template


resource "aws_launch_template" "web_lt" {{
  image_id      = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"
  user_data     = base64encode(file("userdata.sh"))
}}
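
The launch template references data.aws_ami.ubuntu, and the ASG mentioned in the overview is not shown above. A sketch of both, assuming an Ubuntu 22.04 AMI filter, illustrative capacity numbers, and the web_tg target group sketched in the ALB section below:

data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

resource "aws_autoscaling_group" "web_asg" {
  desired_capacity    = 2
  min_size            = 2
  max_size            = 4
  vpc_zone_identifier = aws_subnet.private[*].id
  target_group_arns   = [aws_lb_target_group.web_tg.arn]

  launch_template {
    id      = aws_launch_template.web_lt.id
    version = "$Latest"
  }
}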

🌐 ALB + Target Group + Listener


resource "aws_lb" "app_alb" {{
  name               = "app-alb"
  internal           = false
  load_balancer_type = "application"
  subnets            = aws_subnet.public[*].id
  security_groups    = [aws_security_group.alb_sg.id]
}}
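
The heading promises a target group and listener that are not shown; a minimal sketch, where the web_tg and http names are assumptions:

resource "aws_lb_target_group" "web_tg" {
  name     = "web-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id

  health_check {
    path = "/"
  }
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.app_alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web_tg.arn
  }
}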

🤖 GitHub Actions CI/CD

.github/workflows/terraform.yml:


name: Terraform CI

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest
    # Credentials are needed by init/plan too (data sources query AWS), so set them job-wide
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    steps:
      - uses: actions/checkout@v3

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Terraform Init
        run: terraform init

      - name: Terraform Plan
        run: terraform plan

      - name: Terraform Apply
        run: terraform apply -auto-approve

🌍 Access the Application

Visit: http://app-alb-1956169530.ap-south-1.elb.amazonaws.com in your browser to view the NGINX welcome page.

πŸ“ GitHub Repo

Link: GitHub Repo (see the .tf and user_data.sh files for the complete code)

πŸ“ Summary

For this assignment, the main objective was to design and deploy a highly available web app on AWS, and I used Terraform to achieve it. The process could also have been automated with the AWS CLI plus a bash script, but that is a weaker approach: AWS login credentials, IAM access tokens, and other secrets would have to be passed through the CLI, which is unsafe if security is the main concern, and it is hard to scale because many separate AWS commands are needed to provision the NAT gateway, security groups, EC2 instances, and other resources.

I began by creating a custom VPC in the ap-south-1 region with public and private subnets distributed across multiple Availability Zones to ensure fault tolerance. An Internet Gateway provides public subnet access, while a NAT Gateway gives the private subnets secure outbound internet access. I set up an Application Load Balancer (ALB) to handle incoming HTTP and HTTPS traffic, with a Security Group that only allows access on ports 80 and 443. Behind the ALB, I created an Auto Scaling Group (ASG) using a launch template for Ubuntu-based t2.micro EC2 instances, configured with a user data script that installs and runs an NGINX server hosting a simple static HTML page that says "Welcome to DevOps Assessment." Additional Security Groups ensure that only the ALB can reach the EC2 instances, and SSH access is tightly restricted to a specific IP.

For monitoring, I configured CloudWatch Logs to capture application logs from each instance (I forgot to pass the CloudWatch log paths in user_data.sh). The entire process is automated through a GitHub Actions workflow that runs Terraform commands to provision and manage the infrastructure, demonstrating a complete Infrastructure as Code (IaC) and CI/CD pipeline approach on AWS.

❓ Q&A

IAM roles and policies for EC2, CodePipeline, and S3 access

EC2 Instance Role
  • Allows EC2 instances to interact with AWS services (e.g., CloudWatch for logs, SSM for management)
  • Best Practice: Use an instance profile (aws_iam_instance_profile) rather than embedding credentials (see the sketch after this list)
  • Attach AWS-managed policies (e.g., CloudWatchAgentServerPolicy) for standardized permissions.
CodePipeline Role
  • Grants permissions for CI/CD workflows (e.g., fetching code, deploying infrastructure).
  • Best Practice: Restrict S3 access to only the artifact bucket
  • Should allow interactions with CodeBuild, ECS, or Lambda if used in the pipeline.
S3 Bucket Policy
  • Ensures only authorized roles (e.g., CodePipeline) can read/write to the artifact bucket
  • Best Practice: Use bucket policies + IAM roles for least-privilege access
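
A minimal Terraform sketch of the instance-profile approach described above; the role and profile names are illustrative:

resource "aws_iam_role" "ec2_role" {
  name = "ec2-cloudwatch-role"

  # Allow EC2 instances to assume this role
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

# AWS-managed policy for the CloudWatch agent
resource "aws_iam_role_policy_attachment" "cw_agent" {
  role       = aws_iam_role.ec2_role.name
  policy_arn = "arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy"
}

resource "aws_iam_instance_profile" "ec2_profile" {
  name = "ec2-cloudwatch-profile"
  role = aws_iam_role.ec2_role.name
}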

Best Practices for Secret Management (Secrets should never be hardcoded)

AWS Secrets Manager
  • Integrates with RDS, Lambda, and ECS natively.
  • Use Case: Production-grade secrets requiring automatic rotation
AWS Systems Manager (SSM) Parameter Store
  • Cheaper than Secrets Manager
  • Use Case: configuration values and secrets that don't need automatic rotation (e.g., API endpoints)
Best Practices
  • Never store secrets in Git
  • Inject secrets at runtime rather than hardcoding them (see the SSM sketch below)
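
As an illustration of runtime injection, a Terraform sketch that reads a SecureString parameter from SSM Parameter Store instead of hardcoding it; the /app/db_password path is hypothetical:

data "aws_ssm_parameter" "db_password" {
  name            = "/app/db_password" # hypothetical parameter path
  with_decryption = true
}

# Referenced at plan/apply time, never committed to Git, e.g.:
# password = data.aws_ssm_parameter.db_password.value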