Design principles for microservices (for example, stateless workloads compared with stateful workloads)

Task Statement 2.1: Design scalable and loosely coupled architectures.

📘 AWS Certified Solutions Architect – Associate (SAA-C03)


1. What Are Microservices?

Microservices architecture is a design approach where an application is divided into small, independent services.

Each service:

  • Has a single responsibility
  • Runs independently
  • Can be deployed separately
  • Communicates with other services using APIs or messaging

In AWS environments, microservices commonly run on:

  • Amazon EC2
  • AWS Lambda
  • Amazon ECS
  • Amazon EKS

2. Why Microservices Matter for the SAA-C03 Exam

AWS strongly promotes:

  • Scalability
  • High availability
  • Loose coupling
  • Independent deployments
  • Fault isolation

The exam will test whether you can:

  • Choose stateless vs stateful correctly
  • Select proper AWS services
  • Design systems that scale independently
  • Avoid tight coupling

3. Core Design Principles of Microservices

You must understand these principles clearly:


Principle 1: Single Responsibility

Each microservice should do one job only.

Example (IT-based):

  • User authentication service
  • Order processing service
  • Payment processing service
  • Notification service

If one service fails, others should continue working.


Principle 2: Loose Coupling

Services should not depend heavily on each other.

Avoid:

  • Direct database sharing
  • Hard-coded connections
  • Synchronous blocking calls everywhere

Instead, use:

  • APIs
  • Event-driven communication
  • Message queues like Amazon SQS
  • Event bus like Amazon EventBridge
  • Streaming with Amazon Kinesis

Loose coupling ensures:

  • Independent scaling
  • Independent failure handling
  • Easier updates
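The queue-based decoupling described above can be sketched in plain Python. This is a minimal illustration using the standard library's `queue.Queue` as a stand-in for Amazon SQS (real code would use boto3's `send_message`/`receive_message`; the service names here are hypothetical):

```python
import queue

# Stand-in for an Amazon SQS queue (illustrative only; a real system
# would call boto3's sqs.send_message / sqs.receive_message).
order_queue = queue.Queue()

def order_service(order_id: str) -> None:
    """Producer: publishes an event instead of calling payments directly."""
    order_queue.put({"order_id": order_id, "status": "PLACED"})

def payment_service() -> list:
    """Consumer: drains the queue whenever it is ready. The order service
    never waits on it, so the two scale and fail independently."""
    processed = []
    while not order_queue.empty():
        msg = order_queue.get()
        processed.append(msg["order_id"])
    return processed

order_service("ord-1")
order_service("ord-2")
print(payment_service())  # -> ['ord-1', 'ord-2']
```

Note that the order service returns immediately after `put()`: if the payment consumer is slow or down, orders simply buffer in the queue instead of failing.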

Principle 3: Independent Scalability

Each microservice must scale independently.

For example:

  • Login service may need 2 instances
  • Reporting service may need 10 instances

Use:

  • Auto Scaling
  • Application Load Balancer
  • Amazon ECS
  • AWS Lambda
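Independent scaling can be made concrete with a simplified target-tracking rule, similar in spirit to what Auto Scaling's target tracking does (this is an assumption-laden sketch, not the actual AWS algorithm: it assumes load is proportional to capacity):

```python
import math

def desired_capacity(current_capacity: int, metric: float, target: float) -> int:
    """Simplified target-tracking sketch: scale capacity so the metric
    (e.g. average CPU %) lands near the target. Assumes proportional load."""
    return max(1, math.ceil(current_capacity * metric / target))

# Login service: light CPU load -> stays small.
print(desired_capacity(2, metric=40.0, target=50.0))   # -> 2

# Reporting service: heavy load -> scales out on its own,
# without touching the login service's capacity.
print(desired_capacity(2, metric=95.0, target=50.0))   # -> 4
```

Because each service has its own scaling group and its own metric, one service growing to 10 instances never forces another to scale with it.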

Principle 4: Fault Isolation

If one service fails:

  • The whole system should NOT go down.

Use:

  • Timeouts
  • Retries
  • Circuit breaker pattern
  • Dead-letter queues (DLQ) with Amazon SQS
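The circuit breaker pattern from the list above can be sketched in a few lines. This is a deliberately minimal version (real implementations also add a timed "half-open" recovery state, which is omitted here):

```python
class CircuitBreaker:
    """Minimal circuit-breaker sketch: after `threshold` consecutive
    failures the circuit opens and further calls fail fast, instead of
    piling up requests against a service that is already down."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    def call(self, func, *args):
        if self.failures >= self.threshold:
            raise RuntimeError("circuit open: failing fast")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1  # count the failure, then re-raise
            raise
        self.failures = 0       # any success resets the breaker
        return result

breaker = CircuitBreaker(threshold=2)

def flaky():
    raise ConnectionError("payment service down")  # hypothetical dependency

for _ in range(2):
    try:
        breaker.call(flaky)
    except ConnectionError:
        pass  # each attempt counts as a failure

try:
    breaker.call(flaky)
except RuntimeError as e:
    print(e)  # -> circuit open: failing fast
```

Failing fast protects the calling service: threads and connections are not held hostage by a dependency that cannot answer, which is exactly the fault isolation the exam looks for.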

4. Stateless vs Stateful Workloads (Very Important for Exam)

This is a frequently tested topic.


What Is a Stateless Workload?

A stateless workload does NOT store client session data locally.

Each request:

  • Is independent
  • Contains all required information
  • Can be handled by any server instance

Example (IT Environment)

A web API:

  • Receives request
  • Processes request
  • Stores data in database
  • Returns response
  • Does NOT store session in memory

Session data stored in:

  • Amazon DynamoDB
  • Amazon RDS
  • Amazon ElastiCache
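The stateless request flow above can be sketched with a plain dict standing in for DynamoDB or ElastiCache (the dict, instance IDs, and handler are illustrative assumptions, not AWS APIs):

```python
# External session store: a dict stands in for DynamoDB / ElastiCache.
SESSION_STORE = {}

def handle_request(instance_id: str, session_id: str, item: str) -> dict:
    """Stateless handler: nothing is kept in instance memory, so ANY
    instance behind the load balancer can serve the next request."""
    cart = SESSION_STORE.setdefault(session_id, [])
    cart.append(item)
    return {"served_by": instance_id, "cart": cart}

# Two DIFFERENT instances serve the same session -- state survives
# because it lives outside the instances.
handle_request("i-aaa", "sess-42", "book")
resp = handle_request("i-bbb", "sess-42", "pen")
print(resp["cart"])  # -> ['book', 'pen']
```

Because the handler carries no local state, instances are interchangeable: any of them can be terminated and replaced without breaking a user's session.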

Why Is Stateless Preferred in AWS?

Stateless workloads:

  • Scale easily
  • Work well with load balancers
  • Are highly available
  • Are easy to replace

Perfect for:

  • AWS Lambda
  • Amazon ECS
  • Amazon EC2 behind a load balancer

What Is a Stateful Workload?

A stateful workload stores session or application data locally on the instance.

If the instance is lost:

  • Data is lost
  • Session breaks

Examples:

  • In-memory session storage
  • Databases
  • File systems
  • Caching engines

When Stateful Is Necessary

Some applications must maintain state:

  • Databases
  • File processing systems
  • Real-time streaming engines

AWS stateful services:

  • Amazon RDS
  • Amazon DynamoDB
  • Amazon ElastiCache
  • Amazon EFS

5. Key Differences: Stateless vs Stateful

| Feature                    | Stateless | Stateful              |
| -------------------------- | --------- | --------------------- |
| Stores session locally?    | No        | Yes                   |
| Easy to scale?             | Very easy | More complex          |
| Works with load balancer?  | Yes       | Needs sticky sessions |
| Failure impact             | Low       | Higher                |
| Best for microservices?    | Yes       | Only when required    |

6. Sticky Sessions (Exam Alert)

When using load balancers:

If state is stored locally, you need sticky sessions.

Sticky sessions:

  • Send same user to same instance
  • Used with Application Load Balancer

BUT:

Sticky sessions reduce scalability and fault tolerance.

The AWS exam prefers:

Move session state to a database or cache instead of using sticky sessions.
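The difference can be demonstrated with a small simulation. A sticky session pins the user to an instance's memory, so a scale-in event destroys the session; an externalized session survives instance replacement (the classes and store below are illustrative stand-ins, not AWS APIs):

```python
class LocalStateInstance:
    """Stateful instance: sessions live in its own memory only."""
    def __init__(self):
        self.sessions = {}

# --- Sticky-session approach: user pinned to one instance ---
inst = LocalStateInstance()
inst.sessions["u1"] = {"cart": ["book"]}

# The instance terminates (scale-in / failure) and is replaced.
inst = LocalStateInstance()
print(inst.sessions.get("u1"))  # -> None  (session lost)

# --- Externalized approach: session in a shared store ---
external_store = {}  # stand-in for DynamoDB / ElastiCache
external_store["u1"] = {"cart": ["book"]}

inst = LocalStateInstance()  # replacement instance
print(external_store["u1"]["cart"])  # -> ['book']  (session survives)
```

This is exactly why the exam treats sticky sessions as a workaround: they couple a user's session to the lifetime of one instance.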


7. How to Design Proper Microservices in AWS

For SAA-C03, remember this design pattern:

1. Stateless Compute Layer

  • Amazon EC2
  • AWS Lambda
  • Amazon ECS

2. Separate State Layer

  • Amazon RDS
  • Amazon DynamoDB
  • Amazon ElastiCache

3. Event-Based Communication

  • Amazon SQS
  • Amazon SNS
  • Amazon EventBridge

8. Synchronous vs Asynchronous Communication

Synchronous

  • Direct API calls
  • One service waits for another
  • Tighter coupling

Asynchronous (Preferred)

  • Message queues
  • Events
  • No waiting

Use:

  • Amazon SQS
  • Amazon SNS
  • Amazon EventBridge

Exam tip:

If a question mentions decoupling, resilience, or buffering → choose SQS or EventBridge.
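A dead-letter queue ties asynchronous messaging and fault isolation together. The sketch below simulates SQS redrive behavior with `collections.deque` (assumption: `MAX_RECEIVE_COUNT` mirrors the `maxReceiveCount` of an SQS redrive policy; no real AWS calls are made):

```python
from collections import deque

MAX_RECEIVE_COUNT = 3  # mirrors an SQS redrive policy's maxReceiveCount

main_queue = deque([{"body": "charge order-7", "receive_count": 0}])
dead_letter_queue = deque()

def process(msg: dict) -> None:
    # Poison message: it fails on every attempt (hypothetical bad payload).
    raise ValueError("malformed payload")

while main_queue:
    msg = main_queue.popleft()
    msg["receive_count"] += 1
    try:
        process(msg)
    except ValueError:
        if msg["receive_count"] >= MAX_RECEIVE_COUNT:
            dead_letter_queue.append(msg)  # park it for later inspection
        else:
            main_queue.append(msg)         # make it visible for a retry

print(len(dead_letter_queue))  # -> 1
```

The poison message is retried a bounded number of times, then moved aside; it never blocks the queue or crashes the consumer, which is the resilience behavior the exam scenarios describe.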


9. Database per Microservice (Important Concept)

Each microservice should:

  • Have its own database
  • Not share database tables directly

Why?

  • Avoid tight coupling
  • Independent scaling
  • Independent deployment
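The database-per-service rule can be sketched as follows: each service keeps a private store and exposes data only through its API. The classes and methods are hypothetical illustrations, with dicts standing in for each service's own database:

```python
class UserService:
    def __init__(self):
        self._db = {}  # private store: no other service touches this

    def create_user(self, user_id: str, name: str) -> None:
        self._db[user_id] = {"name": name}

    def get_user_name(self, user_id: str) -> str:
        return self._db[user_id]["name"]  # the public API

class OrderService:
    def __init__(self, users: UserService):
        self._db = {}       # its OWN database, not shared tables
        self._users = users  # depends on the API, not on the schema

    def place_order(self, order_id: str, user_id: str) -> dict:
        order = {"customer": self._users.get_user_name(user_id)}
        self._db[order_id] = order
        return order

users = UserService()
users.create_user("u1", "Asha")
orders = OrderService(users)
print(orders.place_order("o1", "u1"))  # -> {'customer': 'Asha'}
```

Because `OrderService` never reads `UserService`'s tables directly, the user service can change its storage (say, migrate from RDS to DynamoDB) without breaking order processing.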

10. Designing for Scalability (Exam Focus)

To design scalable microservices:

✔ Use stateless compute
✔ Externalize state
✔ Use Auto Scaling
✔ Use load balancers
✔ Use asynchronous messaging
✔ Avoid shared databases
✔ Avoid sticky sessions


11. Common Exam Scenarios

Scenario 1:

Application must handle traffic spikes →
Solution: Stateless app + Auto Scaling + Load Balancer

Scenario 2:

Order processing must not fail if payment service is down →
Solution: Use SQS queue between services

Scenario 3:

Users losing session when instance terminates →
Solution: Store session in DynamoDB or ElastiCache

Scenario 4:

System tightly coupled →
Solution: Use event-driven architecture


12. Serverless Microservices

Serverless is naturally stateless.

Use:

  • AWS Lambda
  • Amazon API Gateway
  • Amazon DynamoDB

Benefits:

  • Automatic scaling
  • No server management
  • Highly available

13. Final Exam Key Takeaways

For SAA-C03, ALWAYS remember:

  1. Microservices = small independent services
  2. Prefer stateless compute
  3. Store state externally
  4. Use asynchronous messaging
  5. Design for failure
  6. Avoid tight coupling
  7. Each service scales independently
  8. Use managed AWS services whenever possible

Quick Memory Trick for Exam

S.L.E.A.F.

  • S → Stateless
  • L → Loosely coupled
  • E → Event-driven
  • A → Auto scaling
  • F → Fault isolation

If an answer follows this pattern, it is usually correct.
