Cloud Architecture

From On-Prem to AWS: Building a Production-Grade 3-Tier App

A developer's guide to understanding why every AWS service exists — starting from what the business actually needs, not the other way around.

Tags: AWS · Architecture · VPC · EC2 · RDS · ALB · Production

What's in this article

  1. What is a 3-tier application?
  2. How it looks on-premises
  3. The problems with on-prem
  4. Moving to AWS — the foundation (VPC)
  5. Tier 1: Presentation layer (ALB + EC2)
  6. Tier 2: Application layer (EC2 + Auto Scaling)
  7. Tier 3: Database layer (RDS)
  8. Adding security (Security Groups, IAM, WAF, ACM)
  9. Adding observability (CloudWatch, CloudTrail)
  10. Supporting services (S3, Secrets Manager)
  11. The big picture — full production architecture

01. What is a 3-tier application?

Before we touch any cloud service, let's make sure we're talking about the same thing. A 3-tier application is one of the most common patterns in software — it's what powers most web apps, internal tools, and SaaS products you've built or worked on.

The idea is simple: split your application into three distinct layers, each responsible for one job.

Tier 1: Presentation → Tier 2: Application → Tier 3: Data

Tier 1 — Presentation layer: This is what the user sees. It could be a React app, a server-rendered HTML page, a mobile app's API gateway, or even just a reverse proxy. Its job is to receive requests from the outside world and hand them off to your business logic.

Tier 2 — Application layer: This is where your business logic lives. Your Node.js API, your Django backend, your Java Spring service — this tier processes requests, runs calculations, applies rules, and decides what data to read or write.

Tier 3 — Data layer: This is where data lives. Your PostgreSQL database, MySQL, or any persistent store. The application tier talks to this layer, not the user. Users never touch your database directly.

Why split it this way? Each tier can scale independently. You can run 10 application servers with 1 database. You can deploy a new frontend without touching the backend. And if one layer is compromised, attackers can't automatically jump to the others.

02. How it looks on-premises

On-premises (or "on-prem") means you own the hardware. Your company has a server room, or racks in a data centre, and you buy, rack, and manage physical machines. Let's walk through what this looks like for a typical 3-tier web app.

On-premises 3-tier architecture (diagram)
Internet users send HTTP/HTTPS to the presentation tier (a load balancer and Nginx web server), which forwards to three application servers, backed by a primary MySQL database with a synchronous standby; everything sits inside the corporate data centre.

In this setup, you have a physical machine acting as a load balancer (often HAProxy or Nginx), a few physical servers running your application, and a primary database server with possibly a standby replica. Everything lives in your data centre, connected by your internal network.

The IT team provisions servers, installs the OS, configures networking, and maintains everything from hardware to software patches. When a disk fails at 2am, someone gets paged.

03. The problems with on-prem — and why the cloud exists

On-prem works. Companies ran on it for decades. But it comes with a set of hard problems that every engineering team eventually hits:

Capacity planning is a guessing game. You have to decide how many servers to buy months in advance. Buy too few and you get crushed during peak traffic. Buy too many and expensive hardware sits idle most of the year.

Scaling is slow and painful. Need another app server? Order hardware. Wait weeks for delivery. Rack it. Cable it. Install the OS. Configure it. Join it to your cluster. You're looking at weeks, sometimes months, to add capacity.

High availability requires duplicate hardware. To survive a server failure, you need at least two of everything — two load balancers, multiple app servers, primary and standby databases. That doubles your hardware cost just to be resilient.

Geographic expansion is a major project. Want to serve users in another country with lower latency? You need another data centre, another set of hardware, another team, another network contract.

Maintenance is your problem. OS patches, hardware replacements, network upgrades, data centre power and cooling — all your responsibility.

The key insight: The cloud doesn't eliminate these problems — it changes who solves them. AWS owns the hardware. You get a programmable API on top of it. That fundamental shift is what all the services are built around.

04. Moving to AWS — your first requirement: a private network

When you move to AWS, you're getting access to massive shared infrastructure. But that raises an immediate question: how do you keep your servers private? How do you make sure your database isn't accessible to other AWS customers, or to the open internet?

The requirement: I need my own isolated network inside AWS — one where I control what can reach what, just like my data centre's internal network.

The solution: Amazon VPC (Virtual Private Cloud).

Amazon VPC — your private slice of AWS

A VPC is a logically isolated network that you define and control. Think of it as your own private data centre's network topology — but software-defined, not physical cables. When you create a VPC, you choose an IP address range (e.g. 10.0.0.0/16) and then subdivide it into subnets.

Inside a VPC, traffic between your resources is completely private. Nothing gets in or out unless you explicitly allow it.

Subnets — public vs private

A subnet is a range of IP addresses within your VPC. Here's where it gets important: subnets come in two flavours, and the distinction is critical for security.

Public subnets are connected to an Internet Gateway — a resource that lets traffic flow to and from the internet. You put things here that need to be reachable from the outside world: your load balancer, for example.

Private subnets have no direct path to the internet. You put your application servers and databases here. They can't be reached from the internet, even if an attacker knows their IP addresses.

Think of it like a building: The lobby (public subnet) is accessible to visitors. The server room (private subnet) requires keycard access and has no windows. Your load balancer lives in the lobby. Your database lives in the server room.

Availability Zones — solving the data centre failure problem

AWS regions (like Sydney, Virginia, Singapore) contain multiple physically separate data centres called Availability Zones (AZs). They're close enough to have low network latency between them, but far enough apart that a power failure, flood, or fire in one doesn't affect the others.

For a production application, you always spread across at least two AZs. That means duplicating your subnets across AZs — one public subnet in AZ-a, another public subnet in AZ-b, one private subnet in AZ-a, another in AZ-b, and so on.
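To make the addressing concrete, here's a small sketch using only Python's standard-library ipaddress module. It carves a /16 VPC into one public and one private /24 per AZ; the numbering convention (publics from .1, privates from .11) matches the CIDRs used in this article but is otherwise an arbitrary choice.

```python
import ipaddress

def plan_subnets(vpc_cidr, azs):
    """Carve a VPC CIDR into one public and one private /24 per AZ.

    Convention (a sketch, not an AWS requirement): public subnets take
    10.0.1.0/24 onwards, private subnets start at 10.0.11.0/24.
    """
    vpc = ipaddress.ip_network(vpc_cidr)
    subnets = list(vpc.subnets(new_prefix=24))   # 10.0.0.0/24 ... 10.0.255.0/24
    plan = {}
    for i, az in enumerate(azs):
        plan[az] = {
            "public":  str(subnets[1 + i]),      # 10.0.1.0/24, 10.0.2.0/24, ...
            "private": str(subnets[11 + i]),     # 10.0.11.0/24, 10.0.12.0/24, ...
        }
    return plan

plan = plan_subnets("10.0.0.0/16", ["ap-southeast-2a", "ap-southeast-2b"])
# plan["ap-southeast-2a"]["public"] → "10.0.1.0/24"
```

In a real deployment these CIDRs become the arguments to your VPC and subnet creation calls (console, CloudFormation, or Terraform); planning them up front avoids overlapping ranges later.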

VPC structure — subnets across two Availability Zones (diagram)
An Internet Gateway connects the internet to the VPC (10.0.0.0/16), which spans AZ-a and AZ-b. Each AZ contains a public subnet (10.0.1.0/24 and 10.0.2.0/24) holding the ALB, and a private subnet (10.0.11.0/24 and 10.0.12.0/24) holding an EC2 app server and RDS. The RDS Primary in AZ-a handles reads and writes and replicates synchronously to the Standby in AZ-b, which exists only for failover.

NAT Gateway — giving private servers outbound internet access

Here's a problem you'll hit quickly: your app servers are in private subnets. They can't be reached from the internet. But they still need to reach the internet — to download software packages, call third-party APIs, or pull from AWS services.

The requirement: My private servers need to initiate outbound connections to the internet, but nothing from the internet should be able to initiate a connection back to them.

The solution: NAT Gateway. It sits in your public subnet, and private resources route their outbound traffic through it. Responses come back in. But nobody from the outside can knock on your private server's door first. It's a one-way valve.
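A toy model of that one-way valve: the gateway remembers which flows a private host initiated, and only matching return traffic gets back in. Real NAT also tracks ports and protocol state; this sketch keeps just the core idea.

```python
class NatGateway:
    """Toy model of NAT behaviour: replies to connections the private
    host opened are allowed back in; unsolicited inbound is not."""

    def __init__(self):
        self._outbound = set()   # (private_ip, remote_ip) flows we initiated

    def outbound(self, private_ip, remote_ip):
        # Private host opens a connection; the gateway records the flow.
        self._outbound.add((private_ip, remote_ip))

    def allow_inbound(self, remote_ip, private_ip):
        # Only return traffic for a tracked outbound flow is let through.
        return (private_ip, remote_ip) in self._outbound

nat = NatGateway()
nat.outbound("10.0.11.5", "151.101.1.63")        # app server calls an external API
nat.allow_inbound("151.101.1.63", "10.0.11.5")   # the reply is allowed
nat.allow_inbound("203.0.113.9", "10.0.11.5")    # an unsolicited probe is not
```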

05. Tier 1: The Presentation Layer — ALB and EC2

Now that we have a network, let's build the first tier. Back on-prem, we had Nginx running on a physical box acting as a load balancer. In AWS, we separate these concerns.

The load balancer problem — and ALB

The requirement: I have multiple application servers. I need incoming traffic distributed across all of them. I also need the load balancer to be smart — if one server goes down, stop sending traffic to it. And I need it to be highly available itself (no single point of failure).

The solution: Application Load Balancer (ALB).

An ALB operates at Layer 7 (the application layer), which means it understands HTTP and HTTPS. That lets it do things a basic Layer 4 load balancer can't:

  - Path-based routing: send /api/* to one set of servers and /images/* to another.
  - Host-based routing: serve api.example.com and www.example.com from different target groups.
  - Health checks: probe an HTTP endpoint on every target and stop routing to any instance that fails.
  - TLS termination: hold the certificate and handle HTTPS so your app servers don't have to.

The ALB is deployed across multiple AZs. AWS manages the underlying infrastructure — you never think about the load balancer's server failing because AWS handles that automatically.
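The behaviour described above (route only to targets passing their health checks) can be sketched in a few lines. This is an illustration of the concept, not how the ALB is actually implemented:

```python
import itertools

class LoadBalancer:
    """Sketch of ALB routing: round-robin across only the targets that
    are currently passing health checks."""

    def __init__(self, targets):
        self.healthy = {t: True for t in targets}
        self._cycle = itertools.cycle(targets)

    def mark(self, target, is_healthy):
        # In a real ALB this state comes from periodic health-check probes.
        self.healthy[target] = is_healthy

    def route(self):
        # Skip unhealthy targets; assumes at least one target is healthy.
        for t in self._cycle:
            if self.healthy[t]:
                return t

alb = LoadBalancer(["app-1", "app-2", "app-3"])
alb.mark("app-2", False)                     # app-2 failed its health check
targets = [alb.route() for _ in range(4)]    # app-2 never receives traffic
```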

Running your web servers — EC2

The requirement: I need virtual machines to run my web server or frontend application (Nginx, a static file server, or a server-side rendering app).

The solution: Amazon EC2 (Elastic Compute Cloud).

EC2 gives you virtual machines — called instances — on demand. You pick the size (CPU, RAM, network), pick an operating system, and have a running server in minutes. A few things worth understanding:

Instance types: EC2 instances come in families. t3.medium is burstable compute, good for variable workloads. c6i.xlarge is compute-optimised, good for CPU-heavy processing. r6g.large is memory-optimised. You pick what fits your workload.

AMIs (Amazon Machine Images): An AMI is a snapshot of a fully configured server — OS, installed software, configuration. Instead of installing everything by hand on each new server, you bake your configuration into an AMI and launch identical copies from it. This is how you scale quickly.

EBS (Elastic Block Store): Your EC2 instance needs a disk. EBS provides persistent block storage. Unlike the instance itself, an EBS volume persists even if the instance stops or terminates. You can also snapshot EBS volumes for backup.

Web tier decision: For modern apps, many teams skip EC2 in the presentation tier entirely and serve their frontend from S3 + CloudFront (AWS's CDN). Static React/Vue apps are just files — you don't need a server to serve them. We'll cover this when we talk about S3.

06. Tier 2: The Application Layer — EC2, Auto Scaling, and Target Groups

The application tier is where your actual business logic runs. This is typically the most complex part to scale because it's stateful in interesting ways — sessions, in-memory caches, long-running processes.

Auto Scaling — the killer feature of cloud compute

The requirement: My traffic is unpredictable. On Monday morning it spikes, at 3am it drops to almost nothing. I don't want to pay for 10 servers that sit idle most of the time, but I also can't afford to drop requests during a traffic spike. I want the number of servers to automatically adjust based on actual demand.

The solution: Auto Scaling Groups (ASG).

An Auto Scaling Group maintains a fleet of EC2 instances. You define minimum instances (e.g. 2 — always on for availability), maximum instances (e.g. 10 — the ceiling), and scaling policies based on metrics like CPU usage or request count.

When CPU crosses 70%, the ASG launches more instances from your AMI. When traffic drops, it terminates excess instances. This is fundamentally impossible on-prem: you can't conjure physical servers into existence, or make them disappear, in minutes.

The ASG works hand-in-hand with the ALB through a Target Group. A target group is just a list of servers that the ALB can send traffic to. When the ASG launches a new instance, it registers it with the target group. When it terminates one, it deregisters it. The ALB automatically starts or stops sending traffic accordingly.
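Here's a toy model of that ASG/target-group handshake: launched instances register with the target group, terminated ones deregister, and a simplified CPU policy drives both. The thresholds and instance IDs are illustrative, not AWS defaults.

```python
class AutoScalingGroup:
    """Toy ASG: keeps the instance count within [min, max] and keeps the
    ALB target group in sync as instances launch and terminate."""

    def __init__(self, minimum, maximum, target_group):
        self.min, self.max = minimum, maximum
        self.target_group = target_group
        self.instances = []
        for _ in range(minimum):
            self._launch()

    def _launch(self):
        instance = f"i-{len(self.instances):04d}"
        self.instances.append(instance)
        self.target_group.add(instance)          # register with the ALB

    def on_cpu(self, cpu_percent):
        # Simplified policy: scale out above 70% CPU, in below 30%.
        if cpu_percent > 70 and len(self.instances) < self.max:
            self._launch()
        elif cpu_percent < 30 and len(self.instances) > self.min:
            self.target_group.discard(self.instances.pop())  # deregister

target_group = set()
asg = AutoScalingGroup(minimum=2, maximum=10, target_group=target_group)
asg.on_cpu(85)   # spike: scale out to 3 instances, all registered
asg.on_cpu(10)   # quiet: scale back in, never below the minimum of 2
```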

Launch Templates — defining your instance blueprint

An ASG needs to know what kind of instance to launch. A Launch Template captures everything: the instance type, AMI, security groups, IAM role, user data script (startup commands), and storage configuration. When the ASG needs to scale out, it follows the launch template exactly. Every instance that comes up is identical. No snowflakes.
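As a sketch of what a launch template captures, here's a dict shaped like the parameters boto3's create_launch_template accepts. Every name and ID below is made up for illustration:

```python
# Hypothetical launch template parameters (AMI ID, names, and security
# group IDs are placeholders, not real resources).
launch_template = {
    "LaunchTemplateName": "app-tier-v1",
    "LaunchTemplateData": {
        "ImageId": "ami-0123456789abcdef0",       # your baked AMI
        "InstanceType": "t3.medium",
        "SecurityGroupIds": ["sg-0a1b2c3d"],      # app-tier security group
        "IamInstanceProfile": {"Name": "app-server-role"},
        # Startup commands (the real API expects this base64-encoded):
        "UserData": "#!/bin/bash\nsystemctl start myapp\n",
        "BlockDeviceMappings": [
            {"DeviceName": "/dev/xvda",
             "Ebs": {"VolumeSize": 20, "VolumeType": "gp3"}},
        ],
    },
}
```

Versioning is the other half of the feature: you publish a new template version (say, with a new AMI) and point the ASG at it, and every subsequently launched instance picks up the change.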

ALB + Auto Scaling Group across two AZs (diagram)
User requests reach the ALB (health checks, SSL termination, path routing), which forwards to a target group backed by an Auto Scaling Group (min 2, desired 4, max 10) of EC2 app servers spread across AZ-a and AZ-b. When CPU exceeds 70%, a CloudWatch alarm triggers the scale-out policy and the ASG launches a new instance.

07. Tier 3: The Database Layer — Amazon RDS

Databases are where most startups and engineering teams feel the most pain running on-prem. Backups, replication, patching, failover — it's a second career's worth of expertise to do it properly. The cloud addresses this head-on.

The requirement: I need a reliable relational database (MySQL, PostgreSQL) that automatically handles backups, survives AZ failures, scales its compute, and keeps my data safe. I don't want to manage the OS or the database engine itself — just the schema and queries.

The solution: Amazon RDS (Relational Database Service).

What RDS manages for you

RDS is a managed database service. You tell AWS what database engine you want (MySQL, PostgreSQL, MariaDB, Oracle, SQL Server), what size, and where. AWS handles everything below the application layer:

  - Provisioning, OS patching, and database engine upgrades.
  - Automated backups with point-in-time recovery.
  - Storage scaling and encryption at rest.
  - Replication, failover, and monitoring plumbing.

Multi-AZ deployment — surviving a data centre failure

The requirement: My database can't go down if an AZ fails. I need automatic failover.

With RDS Multi-AZ, AWS maintains a synchronous standby replica in a different AZ. Every write to the primary is simultaneously replicated to the standby. If the primary AZ has an outage, AWS automatically promotes the standby to primary and updates the DNS endpoint — your application reconnects to the new primary. The failover typically completes in 60–120 seconds. No human intervention required.

Important: The Multi-AZ standby is not a read replica — you can't query it. It exists purely for failover. If you want to offload read queries, you add separate Read Replicas.
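The failover mechanics can be modelled in a few lines: the endpoint your app connects to never changes, only what it resolves to. The endpoint string here is a hypothetical example of the RDS naming format.

```python
class MultiAZDatabase:
    """Sketch of RDS Multi-AZ failover: the endpoint name is stable;
    the instance it resolves to is what changes."""

    def __init__(self):
        # Hypothetical endpoint; your app's connection string uses this.
        self.endpoint = "mydb.abc123.ap-southeast-2.rds.amazonaws.com"
        self.primary, self.standby = "db-az-a", "db-az-b"

    def resolve(self):
        return self.primary      # DNS always points at the current primary

    def fail_over(self):
        # AZ outage: the standby is promoted and DNS flips automatically.
        self.primary, self.standby = self.standby, self.primary

db = MultiAZDatabase()
db.resolve()      # "db-az-a"
db.fail_over()    # AZ-a goes down
db.resolve()      # "db-az-b": same endpoint, new primary
```

The practical consequence for your application: configure connection retries, because during the 60–120 second failover window connections to the endpoint will fail before the DNS flip completes.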

RDS Read Replicas

The requirement: My app does a lot of reads (analytics queries, dashboard queries, search). They're hammering the primary database. I need to scale read capacity without splitting my dataset.

Read Replicas are asynchronous copies of your database. You can create multiple read replicas (up to 15 for the MySQL and PostgreSQL engines) and point read-heavy queries at them. The primary handles all writes; replicas handle reads. This is how large-scale applications run reporting queries without impacting transactional throughput.
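The read/write split is something your application (or a proxy in front of the database) has to implement. A minimal sketch, assuming a crude "SELECT means read" heuristic:

```python
import itertools

class DatabaseRouter:
    """Sketch of read/write splitting: writes always go to the primary,
    reads round-robin across read replicas (falling back to the primary
    when there are none)."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas) if replicas else None

    def for_query(self, sql):
        # Crude heuristic; real routers also consider transactions and
        # replication lag before sending a read to a replica.
        is_read = sql.lstrip().upper().startswith("SELECT")
        if is_read and self._replicas:
            return next(self._replicas)
        return self.primary

router = DatabaseRouter("primary", ["replica-1", "replica-2"])
router.for_query("SELECT * FROM orders")           # a replica
router.for_query("UPDATE orders SET status = 1")   # the primary
```

One caveat worth remembering: because replication is asynchronous, a replica can serve slightly stale data, so read-your-own-writes flows should stay on the primary.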

RDS Multi-AZ with Read Replica (diagram)
App servers send reads and writes to the RDS Primary in AZ-a, which replicates synchronously to the Standby in AZ-b (failover only) and asynchronously to a Read Replica that serves read-only analytics queries.
Where we are — VPC with ALB, EC2, and RDS in place (diagram)
Users send HTTP/HTTPS to the ALB in the public subnets (alongside a NAT Gateway for outbound traffic), and the ALB routes to EC2 app servers, each with an IAM role, in an Auto Scaling Group spread across private subnets in AZ-a (10.0.11.0/24) and AZ-b (10.0.12.0/24). The RDS Primary in AZ-a replicates synchronously to the Standby in AZ-b, with an asynchronous Read Replica for analytics.

08. Security — the requirements drive everything

At this point we have a working 3-tier application on AWS. But a working app and a secure app are different things. Let's talk about what security actually means for a production system — and then which AWS services address each requirement.

Requirement 1: Control traffic between tiers

Your app servers should only receive traffic from the load balancer, not directly from the internet. Your database should only receive traffic from your app servers, not from the load balancer or internet. You need rules on every communication path.

The solution: Security Groups.

A Security Group is a stateful virtual firewall that you attach to EC2 instances and RDS databases. You define inbound and outbound rules. For example:

  - The ALB's Security Group allows inbound 443 (HTTPS) from anywhere (0.0.0.0/0).
  - The app servers' Security Group allows inbound traffic only from the ALB's Security Group.
  - The database's Security Group allows inbound traffic on the database port (3306 for MySQL, 5432 for PostgreSQL) only from the app servers' Security Group.

This creates a chain of trust. Even if someone somehow got into your VPC, they can't reach the database unless they're coming from an allowed app server. The SG reference (rather than IP address) is powerful — it doesn't break when app servers scale in/out with different IPs.
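That chain of trust can be expressed as a tiny rule table. The group names and ports below are hypothetical, but the structure mirrors how rules reference a source Security Group rather than an IP:

```python
# Hypothetical rule set mirroring the tier chain: each tier only admits
# traffic whose *source security group* is the tier in front of it.
RULES = {
    "alb-sg": {"port": 443,  "source": "0.0.0.0/0"},  # internet-facing
    "app-sg": {"port": 8080, "source": "alb-sg"},     # only from the ALB
    "db-sg":  {"port": 5432, "source": "app-sg"},     # only from app servers
}

def is_allowed(source, dest_sg, port):
    """Is traffic from `source` to the resource behind `dest_sg` allowed?"""
    rule = RULES[dest_sg]
    return port == rule["port"] and source == rule["source"]

is_allowed("alb-sg", "app-sg", 8080)   # True: the ALB may reach the app tier
is_allowed("alb-sg", "db-sg", 5432)    # False: the ALB can't skip a tier
```

Because the rules name groups, not addresses, a freshly launched app server is covered the moment its instance joins app-sg, with no rule edits needed.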

Requirement 2: Control what AWS services my EC2 can access

Your app servers need to read from S3, write to SQS, access Secrets Manager, or call other AWS services. You don't want to hard-code AWS credentials (access keys) in your code or configuration — that's a serious security risk.

The solution: IAM Roles attached to EC2 instances.

An IAM Role is a set of permissions that an AWS service can assume. You attach a role to your EC2 instance. When your application code calls AWS SDK functions, the SDK automatically fetches temporary credentials from the instance's metadata endpoint. No access keys in environment variables or config files. The credentials rotate automatically. If the server is compromised, there are no long-lived keys to steal.

Rule of thumb: Every EC2 instance, Lambda function, ECS task, or any compute resource should have an IAM Role. Never use IAM Users with access keys for application code. That's the biggest security mistake teams make early on.

Requirement 3: Protect against web attacks (SQLi, XSS, bots)

Your ALB is facing the internet. That means it's also facing automated scanners, SQL injection attempts, cross-site scripting attacks, credential stuffing bots, and DDoS traffic. You need a layer that can detect and block these before they hit your servers.

The solution: AWS WAF (Web Application Firewall).

AWS WAF sits in front of your ALB (or CloudFront) and evaluates each HTTP request against a set of rules. You can:

  - Enable AWS managed rule groups that block common attack patterns (SQL injection, XSS).
  - Rate-limit requests per IP to blunt credential stuffing and scraping bots.
  - Block or allow requests by IP range, country, or request content (headers, URI, body).

Blocked requests never reach your application server. Your servers only see traffic that passed WAF inspection.

Requirement 4: HTTPS — encrypting traffic in transit

The requirement: All user traffic must be encrypted. Users should see a green padlock. We need an SSL/TLS certificate for our domain.

The solution: AWS Certificate Manager (ACM).

ACM provisions and manages SSL/TLS certificates for free. You request a certificate for your domain, verify ownership (via email or DNS), and ACM issues the cert. You then attach it to your ALB. ACM handles renewals automatically — no more scrambling when certificates expire. The ALB terminates TLS, so your app servers communicate over plain HTTP internally on your private network (which is fine — the private network is already isolated).

Security layers — WAF, Security Groups, IAM Roles (diagram)
Internet HTTPS traffic passes through AWS WAF (SQL injection, XSS, rate limiting, bot control) to the ALB, which holds the ACM certificate and terminates TLS; its Security Group allows 443 from 0.0.0.0/0. The app servers' Security Group admits traffic only from the ALB's Security Group, and the instances carry IAM roles (no hardcoded credentials) for access to S3, Secrets Manager, and SQS. RDS sits in a private subnet, accepts connections only from the app Security Group, and is unreachable from the internet.

09. Observability — you can't fix what you can't see

Your app is running. Traffic is flowing. But you're flying blind if you don't know how your system is behaving. Is CPU spiking? Are database queries slowing down? Are there 500 errors being returned? Who made that change at 3pm that broke everything?

Observability in production has three components: metrics (what's happening now), logs (what happened and what did it say), and audit trail (who did what). AWS has services for each.

Amazon CloudWatch — metrics, logs, and alarms

CloudWatch is AWS's central monitoring service. By default, every AWS service publishes metrics to CloudWatch automatically — EC2 CPU, RDS connections, ALB request count, error rates, and hundreds more. You don't have to configure anything to get started.

Metrics and Dashboards: You can build CloudWatch dashboards that show all your key metrics on a single screen. Create a dashboard showing ALB request count, EC2 CPU across your ASG, RDS connection count, and RDS query latency side by side. In an incident, you can see the whole picture in seconds.

Alarms: Set thresholds on any metric. If CPU stays above 80% for 5 minutes, trigger an alarm. Alarms can notify you via SNS (email, SMS, PagerDuty), trigger Auto Scaling actions, or invoke Lambda functions. This is how Auto Scaling knows when to scale: CloudWatch sees CPU, fires an alarm, the ASG reacts.
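The "above a threshold for N periods" evaluation is the heart of an alarm, and it's worth seeing why a single noisy spike doesn't fire one. A toy version of the logic:

```python
def alarm_state(datapoints, threshold, periods):
    """Toy CloudWatch alarm: returns "ALARM" only when the metric breaches
    the threshold for `periods` consecutive datapoints."""
    streak = 0
    for value in datapoints:
        streak = streak + 1 if value > threshold else 0
        if streak >= periods:
            return "ALARM"
    return "OK"

alarm_state([50, 85, 90, 88], threshold=80, periods=3)   # "ALARM"
alarm_state([50, 85, 60, 88], threshold=80, periods=3)   # "OK": streak broken
```

Requiring consecutive breaches is what keeps a one-off CPU spike from triggering a scale-out; real alarms add refinements like "M out of N datapoints" and missing-data handling.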

CloudWatch Logs: Your application's log output (console logs, error logs, access logs) can be streamed to CloudWatch Logs. You can then search across all instances simultaneously. Instead of SSH-ing into each server to look at a log file, you query CloudWatch Logs Insights with its purpose-built query syntax: find all ERROR-level logs in the last hour across all app servers. This is transformative for debugging production issues.

CloudWatch Container Insights / Application Insights: For more sophisticated setups, these give you automatic dashboards for containerised applications and common server-side stacks.

AWS CloudTrail — the audit log for your AWS account

The requirement: Something changed and the app broke. Who changed a Security Group? Who deleted an IAM policy? Who modified the RDS parameter group at 2pm?

The solution: AWS CloudTrail.

CloudTrail records every API call made in your AWS account — from the console, CLI, SDK, or any service acting on your behalf. Every action is logged: who did it, from what IP, at what time, and what the exact API call was. CloudTrail logs go to S3 and optionally to CloudWatch Logs for searching.

In a security incident, CloudTrail is your forensic record. In a misconfiguration incident, it tells you exactly what changed and who did it. It should be enabled in every AWS account from day one, with a CloudTrail trail writing to an S3 bucket that the ops team can't accidentally delete.
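To make the forensic value concrete, here's a trimmed, hypothetical CloudTrail event (real events carry many more fields) and a sketch that pulls out the who, what, when, and from where:

```python
# Hypothetical CloudTrail event: the field names match the real event
# structure, but all values are made up.
event = {
    "eventTime": "2024-05-14T14:02:11Z",
    "eventName": "AuthorizeSecurityGroupIngress",
    "sourceIPAddress": "203.0.113.42",
    "userIdentity": {"arn": "arn:aws:iam::123456789012:user/alice"},
}

def summarise(event):
    """One-line forensic summary of a CloudTrail event."""
    who = event["userIdentity"]["arn"].rsplit("/", 1)[-1]
    return (f'{who} called {event["eventName"]} '
            f'from {event["sourceIPAddress"]} at {event["eventTime"]}')

summarise(event)
# "alice called AuthorizeSecurityGroupIngress from 203.0.113.42 at 2024-05-14T14:02:11Z"
```

In practice you rarely parse these by hand: shipping the trail to CloudWatch Logs (or Athena over the S3 bucket) lets you run exactly this kind of "who touched this Security Group" query at scale.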

AWS X-Ray — distributed tracing

As your application grows, requests start touching multiple services — the ALB, app servers, RDS, maybe SQS, maybe Lambda. When a request takes 4 seconds, which service was slow? Was it the database query, an external API call, or compute in your application?

X-Ray instruments your code to trace the lifecycle of each request across all the services it touches. You see a waterfall chart showing exactly where time was spent. It's the difference between "the app is slow" and "the getUserPurchaseHistory database query takes 2.3 seconds, and it's called 12 times per request."

📊 CloudWatch Metrics: real-time numbers from every AWS service. CPU, memory, latency, error rate, connections.

🚨 CloudWatch Alarms: thresholds on metrics. Trigger Auto Scaling, SNS notifications, or Lambda on breaches.

📄 CloudWatch Logs: centralised log storage. Query across all instances. Searchable, filterable, alarmed.

🔍 CloudTrail: full audit log of every AWS API call. Who did what, when, from where.

🔗 X-Ray: distributed request tracing. See exactly where time is spent across your services.

🛡️ GuardDuty: threat detection. Analyses CloudTrail, VPC Flow Logs, and DNS to find suspicious behaviour automatically.

10. Supporting Services — S3 and Secrets Manager

Beyond the three tiers and their security/monitoring layers, two services come up in almost every production application.

Amazon S3 — object storage for everything

The requirement: I need to store user-uploaded files (profile photos, documents, videos). I also want to serve my frontend assets (React build output, CSS, JS) without running a web server. And I need somewhere to put my application's log archives, database backup exports, and deployment artefacts.

S3 (Simple Storage Service) is an object store — not a file system, not a database, but a flat store of objects addressed by a key. You put files in, you get them back by key. It scales to virtually unlimited storage with no provisioning. Pricing is per GB stored plus per request.

For frontend assets, S3 + CloudFront (AWS's CDN) is the standard pattern: upload your React build to S3, put CloudFront in front of it, and your static site is served from edge locations worldwide with low latency, effectively unlimited scale, and very low storage cost.

AWS Secrets Manager — where credentials live

The requirement: My application needs to connect to the database. That requires a username and password. Where do I store that securely? I can't hard-code it, can't put it in an environment variable on disk, and can't store it in source control.

Secrets Manager stores secrets (database passwords, API keys, OAuth tokens) encrypted with KMS. Your application code calls Secrets Manager at runtime to fetch the credentials. Benefits:

  - Secrets never live in code, config files, or AMIs; they're encrypted at rest and fetched on demand.
  - Access is controlled by IAM, and every read is recorded in CloudTrail.
  - Automatic rotation: Secrets Manager can rotate database credentials on a schedule without redeploying your application.
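A common runtime pattern is to cache the fetched secret with a TTL, so rotation is picked up without calling the API on every request. A minimal sketch, where `fetch` stands in for the real SDK call (stubbed here so the example is self-contained):

```python
import time

class SecretCache:
    """Sketch of a secrets-at-runtime pattern: fetch via `fetch` (a stand-in
    for a Secrets Manager SDK call) and cache with a TTL so rotated
    credentials are picked up without hammering the API."""

    def __init__(self, fetch, ttl_seconds=300):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._cache = {}   # name -> (value, fetched_at)

    def get(self, name):
        entry = self._cache.get(name)
        if entry and time.monotonic() - entry[1] < self._ttl:
            return entry[0]                      # fresh enough: serve cached
        value = self._fetch(name)                # real code: SDK call here
        self._cache[name] = (value, time.monotonic())
        return value

calls = []
def fake_fetch(name):
    calls.append(name)
    return {"username": "app", "password": "s3cret"}   # stand-in secret

secrets = SecretCache(fake_fetch)
secrets.get("prod/db")   # fetches
secrets.get("prod/db")   # served from cache: still only one fetch call
```

Pair this with rotation by keeping the TTL short enough that a rotated password is re-fetched before connection pools start failing, and by retrying a fresh fetch on authentication errors.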

11. The big picture — full production architecture

Let's put it all together. Here's what a production-grade 3-tier application looks like on AWS, with every service we've discussed placed in context:

Full production AWS architecture — 3-tier app (diagram)
Users connect over HTTPS through AWS WAF and an ACM certificate to the ALB in the public subnets (Multi-AZ, health checks, SSL termination, path routing), with a NAT Gateway for outbound-only traffic. The private subnets across AZ-a and AZ-b hold an Auto Scaling Group (min 2, max 10) of EC2 app servers with IAM roles, plus the RDS Primary (AZ-a), its synchronous Standby (AZ-b), and an asynchronous Read Replica for analytics. Outside the VPC sit the account-level services: S3 (frontend assets, logs, backups), Secrets Manager (DB credentials, auto-rotation), CloudWatch (metrics, logs, alarms, dashboards), CloudTrail (API audit log), and IAM (roles and policies, least privilege). Everything is deployed Multi-AZ within one region, with traffic encrypted in transit and at rest.

Look at this diagram and trace a single user request:

  1. User makes an HTTPS request to your domain.
  2. DNS resolves to your ALB endpoint. Traffic passes through WAF — SQL injection attempts and bot traffic are blocked here.
  3. The ALB terminates TLS using the ACM certificate, performs a health check on your app servers, and forwards the request to a healthy instance.
  4. An EC2 app server in a private subnet receives the request. It fetches its DB credentials from Secrets Manager using its IAM Role (no hardcoded passwords).
  5. The app server queries the RDS Primary in the isolated DB subnet. The Security Group on RDS only allows connections from the app server's Security Group.
  6. The response flows back up the chain. All of this is logged to CloudWatch Logs. If CPU spikes, CloudWatch Alarms fire and the Auto Scaling Group launches more instances. Any AWS console/API actions are captured in CloudTrail.

Every service in this diagram exists because of a real requirement — not because AWS invented complexity for its own sake. The services are answers to questions: How do I isolate my network? How do I survive an AZ failure? How do I not store passwords in my code? How do I know when something breaks?


What comes next

This architecture covers the core of a production 3-tier app. As you go deeper, the natural next layers to explore are DNS and content delivery (Route 53, CloudFront), caching (ElastiCache), containers (ECS, EKS), infrastructure as code (CloudFormation, Terraform), and automated deployment pipelines (CI/CD).

The pattern to internalise: Every time you add a service to this architecture, ask — what problem does this solve? What breaks if I don't have it? That question-first thinking is what separates engineers who understand cloud from engineers who just follow tutorials.

About the Author

Mayank Pandey

AWS Community Hero and Cloud Architect with 15+ years of experience. AWS Solutions Architect Professional, FinOps Practitioner, and AWS Authorized Instructor. Creator of the KnowledgeIndia YouTube channel (80,000+ subscribers). Based in Melbourne, Australia.