Task Statement 3.2: Design high-performing and elastic compute solutions.
📘 AWS Certified Solutions Architect – Associate (SAA-C03)
1. What is Distributed Computing?
Distributed computing is a system where computing tasks are split across multiple servers or nodes, instead of relying on a single server. Each node performs part of the work, and they communicate with each other to produce the final result.
Key points for AWS exam:
- Distributed systems are scalable — you can add more nodes to handle more workload.
- They are fault-tolerant — if one node fails, others continue working.
- They rely on network connectivity to coordinate tasks.
IT example:
A large website serving thousands of requests per second uses multiple EC2 instances behind an Application Load Balancer. Each instance handles part of the traffic so no single server is overloaded.
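A minimal boto3 sketch of that pattern, assuming the VPC, subnets, and instances already exist (all IDs and names here are hypothetical placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Create a target group for the web tier (VPC ID is a placeholder).
tg = elbv2.create_target_group(
    Name="web-api-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
    HealthCheckPath="/health",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register EC2 instances from different AZs so traffic is spread across them.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-aaaa1111"}, {"Id": "i-bbbb2222"}],
)

# Create the Application Load Balancer across subnets in two AZs.
alb = elbv2.create_load_balancer(
    Name="web-api-alb",
    Subnets=["subnet-az1-placeholder", "subnet-az2-placeholder"],
    Type="application",
)

# Forward all listener traffic to the target group.
elbv2.create_listener(
    LoadBalancerArn=alb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```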
2. AWS Global Infrastructure and Distributed Computing
AWS provides global infrastructure that supports distributed computing at scale. Understanding this helps you design high-performing and elastic solutions.
A. AWS Regions
- A Region is a geographic area (e.g., US East (N. Virginia)).
- Each Region has multiple Availability Zones (AZs).
- AZs consist of one or more isolated data centers with their own power, networking, and cooling.
- Using multiple AZs allows high availability and fault tolerance.
Exam Tip:
Design solutions that span multiple AZs to ensure your applications stay up if one AZ fails.
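As a rough sketch of that tip, an Auto Scaling group whose subnets sit in two different AZs keeps instances spread across both (group name, launch template, and subnet IDs below are hypothetical placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Listing subnets from two AZs makes the group survive a single-AZ failure:
# Auto Scaling balances instances across the listed subnets/AZs.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",                        # hypothetical name
    LaunchTemplate={"LaunchTemplateName": "web-template",  # assumed to exist
                    "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-az1-placeholder,subnet-az2-placeholder",
)
```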
B. Availability Zones (AZs)
- AZs are connected via low-latency, high-bandwidth links.
- You can deploy EC2 instances, RDS databases, or EKS clusters across AZs.
- Distributed computing across AZs allows load balancing and failover.
Example in IT:
A distributed database like Amazon Aurora replicates its storage across three AZs. If the AZ hosting the primary instance goes down, Aurora automatically fails over to a replica in another AZ with minimal downtime.
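A hedged boto3 sketch of a minimal Aurora setup with a reader in a second AZ as the failover target (identifiers are hypothetical; a real cluster also needs networking and security settings):

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Aurora storage replicates across three AZs automatically; adding a
# reader instance in a second AZ gives the cluster a failover target.
rds.create_db_cluster(
    DBClusterIdentifier="app-aurora-cluster",   # hypothetical
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",            # use Secrets Manager in practice
)

for name, az in [("app-aurora-writer", "us-east-1a"),
                 ("app-aurora-reader", "us-east-1b")]:
    rds.create_db_instance(
        DBInstanceIdentifier=name,
        DBClusterIdentifier="app-aurora-cluster",
        DBInstanceClass="db.r6g.large",
        Engine="aurora-mysql",
        AvailabilityZone=az,
    )
```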
C. Edge Locations
- Edge locations are smaller data centers closer to users.
- Primarily used by Amazon CloudFront and AWS Global Accelerator.
- They cache content or route traffic to improve performance and reduce latency.
IT example:
A global SaaS platform uses CloudFront to cache static content (images, scripts) at edge locations, reducing load on the main servers and improving response time for users worldwide.
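One quick way to observe edge caching: CloudFront reports cache hits and misses in the X-Cache response header. The distribution URL below is a hypothetical placeholder:

```python
import requests  # third-party HTTP library

URL = "https://d111111abcdef8.cloudfront.net/logo.png"  # hypothetical distribution

# The first request typically misses the edge cache; a repeat from the same
# location should hit it. CloudFront reports this in the X-Cache header.
for attempt in (1, 2):
    resp = requests.get(URL)
    print(attempt, resp.status_code, resp.headers.get("X-Cache"))
# Expected shape: "Miss from cloudfront" first, then "Hit from cloudfront"
```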
3. Elastic Compute on AWS
Elasticity means your system can automatically adjust capacity to match demand.
A. Amazon EC2 Auto Scaling
- Automatically adds or removes EC2 instances based on traffic or CPU usage.
- Ensures performance during peak load and cost efficiency during low load.
IT example:
A batch-processing job uses EC2 Auto Scaling to spin up more instances during heavy data ingestion, then terminates them when processing is done.
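A minimal sketch of such a policy with boto3, assuming the Auto Scaling group already exists (the group name is a placeholder). Target tracking keeps average CPU near the target by adding and removing instances:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Target tracking adds instances when average CPU rises above the target
# and removes them when it falls below, so capacity follows the workload.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="batch-asg",            # hypothetical group
    PolicyName="cpu-target-60",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```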
B. AWS Lambda
- Serverless compute — you don’t manage servers.
- Scales automatically based on requests.
- Supports microservices architectures and event-driven workloads.
IT example:
Processing user uploads in S3: every new file triggers a Lambda function to process and store metadata.
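A minimal sketch of that handler, assuming the S3 event notification is already wired to the function and that metadata lands in a hypothetical DynamoDB table:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("upload-metadata")  # hypothetical table

def handler(event, context):
    # A single S3 event notification can carry multiple records.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"]["size"]
        # Store basic metadata about the uploaded object.
        table.put_item(Item={"key": key, "bucket": bucket, "size": size})
    return {"processed": len(event["Records"])}
```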
C. Amazon ECS and Fargate
- ECS (Elastic Container Service) lets you run Docker containers on a cluster of EC2 instances (the EC2 launch type).
- Fargate is serverless container compute — no EC2 management needed.
- Supports distributed applications that scale horizontally.
IT example:
A containerized API service running on ECS with Fargate scales automatically as API requests increase.
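A hedged boto3 sketch of that setup: a Fargate task definition plus a service that runs multiple copies of it (every name, ARN, and image URI below is a placeholder):

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Fargate tasks declare CPU/memory up front; there are no EC2 instances to manage.
task = ecs.register_task_definition(
    family="api-task",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "api",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest",  # placeholder
        "portMappings": [{"containerPort": 8080}],
    }],
)

# The service keeps desiredCount copies running; Application Auto Scaling
# can drive desiredCount up and down as request load changes.
ecs.create_service(
    cluster="api-cluster",                       # assumed to exist
    serviceName="api-service",
    taskDefinition=task["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-az1-placeholder", "subnet-az2-placeholder"],
        "assignPublicIp": "ENABLED",
    }},
)
```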
4. Distributed Storage & Compute Integration
AWS distributed storage services complement compute for high performance:
- Amazon S3: Stores massive amounts of data across multiple AZs.
- Amazon EFS (Elastic File System): Provides shared storage across multiple EC2 instances.
- Amazon FSx: High-performance file systems for specialized workloads.
- Amazon DynamoDB: Low-latency NoSQL database; global tables replicate data across Regions.
Key Exam Point:
When designing distributed applications, use storage and compute services together to maintain scalability, low latency, and fault tolerance.
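As a small illustration of that compute-plus-storage pattern, a boto3 sketch writing and reading order data in DynamoDB (the table name and key schema are hypothetical):

```python
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
# Assumed table: partition key "customer_id", sort key "order_id".
table = dynamodb.Table("orders")

# Reads and writes are addressed by key, which DynamoDB spreads across
# partitions for consistent low-latency access at scale.
table.put_item(Item={"customer_id": "c-42", "order_id": "o-1001", "total": 95})

resp = table.get_item(Key={"customer_id": "c-42", "order_id": "o-1001"})
print(resp.get("Item"))
```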
5. Key AWS Features Supporting Distributed Computing
| Feature | Purpose | IT Example |
|---|---|---|
| Elastic Load Balancing (ELB) | Distributes traffic across multiple instances | Web API requests handled by EC2 across AZs |
| Amazon CloudFront | Caches content globally at edge locations | Static website content served quickly worldwide |
| Auto Scaling Groups | Automatically scale EC2 instances | Data processing jobs that grow/shrink dynamically |
| AWS Global Accelerator | Routes traffic via optimal path | Improves latency for multi-region apps |
| Amazon Aurora Global Database | Replicates DB across regions | Low-latency global app with disaster recovery |
6. Best Practices for High-Performing Distributed Applications
- Use multiple AZs for fault tolerance.
- Use regions strategically if your app serves global users.
- Leverage edge services (CloudFront, Global Accelerator) to reduce latency.
- Enable auto scaling to handle unpredictable workloads.
- Choose the appropriate compute type:
  - Lambda for event-driven workloads.
  - ECS/Fargate for microservices.
  - EC2 for long-running or specialized workloads.
- Use distributed storage for high availability and consistency.
- Monitor performance with Amazon CloudWatch and adjust scaling policies.
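A minimal CloudWatch sketch for that last point, assuming a hypothetical Auto Scaling group: an alarm on sustained high CPU whose action (placeholder ARN) would fire a scaling policy or notification:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when average CPU across the group stays above 70% for two
# consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:placeholder"],
)
```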
Summary for the Exam
- Distributed computing splits work across multiple nodes to scale and survive failures.
- AWS global infrastructure (Regions, AZs, Edge locations) supports high performance and low-latency access.
- Edge services like CloudFront and Global Accelerator bring content closer to users.
- Elastic compute (EC2 Auto Scaling, Lambda, ECS/Fargate) adjusts automatically to workload.
- Combining distributed compute and storage ensures scalable, resilient, and high-performing architectures.
Exam Tip:
AWS often asks scenario-based questions like:
“You need a web app that handles millions of users worldwide with minimal latency and automatic scaling. Which services do you use?”
The answer would involve multi-AZ EC2/ECS/Fargate + ELB + CloudFront + auto scaling.
