Task Statement 4.3: Design cost-optimized database solutions.
📘 AWS Certified Solutions Architect – Associate (SAA-C03)
✅ What is Caching?
Caching is the process of storing frequently accessed data in a fast storage layer so that future requests can be served quickly and cheaply.
👉 Instead of repeatedly querying a database (which is slower and more expensive), the application retrieves data from a cache.
🎯 Why Caching is Important (Exam Focus)
Caching helps:
- ✅ Reduce database load
- ✅ Improve application performance (low latency)
- ✅ Reduce cost (fewer database reads)
- ✅ Handle high traffic efficiently
👉 In the exam, caching is often the best answer for cost optimization + performance improvement.
🔹 Types of Caching Strategies
You must understand these core strategies:
1. Cache-Aside (Lazy Loading)
📌 How it works:
- Application checks cache first
- If data is found → return it (cache hit)
- If not found → fetch from database (cache miss)
- Store result in cache
- Return data
📌 Key Points:
- Most commonly used strategy
- Cache is only filled when needed
- Simple to implement
📌 Pros:
- Reduces unnecessary caching
- Only frequently used data is cached
📌 Cons:
- First request is slower (cache miss)
- Data may become stale
📌 Exam Tip:
👉 If you see:
- “Load data into cache only when requested”
- “Improve performance with minimal changes”
➡️ Answer: Cache-aside
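The cache-aside flow above can be sketched in a few lines of Python. This is a minimal illustration using a plain dict as the cache and a hypothetical `fetch_from_db()` standing in for a real (slow, costly) database query — not any specific AWS client:

```python
cache = {}

def fetch_from_db(key):
    # Placeholder for a real database query (illustrative only).
    return f"value-for-{key}"

def get(key):
    if key in cache:                # cache hit: serve from memory
        return cache[key]
    value = fetch_from_db(key)      # cache miss: go to the database
    cache[key] = value              # lazily populate the cache
    return value
```

Note how the cache is only filled on a miss — that is exactly why the first request is slower and why only requested data ends up cached.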
2. Write-Through Cache
📌 How it works:
- Application writes data to cache
- Cache automatically writes to database
📌 Key Points:
- Cache and database are always in sync
- No stale data
📌 Pros:
- Strong data consistency
- Cache always up-to-date
📌 Cons:
- Higher write latency
- Writes happen twice (cache + DB)
📌 Exam Tip:
👉 If question says:
- “Ensure data consistency”
- “No stale data allowed”
➡️ Answer: Write-through
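A write-through cache can be sketched like this (again a toy model with two dicts, not a real ElastiCache setup): every write touches both the cache and the database synchronously, so reads never see stale data, at the cost of slower writes.

```python
cache = {}
database = {}

def write_through(key, value):
    # Write to the cache and then synchronously to the database,
    # so both stores always agree (the write pays double latency).
    cache[key] = value
    database[key] = value

def read(key):
    # Reads are served from the cache; it is always up to date.
    return cache.get(key)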
3. Write-Back (Write-Behind)
📌 How it works:
- Application writes data to cache
- Cache writes to database later (asynchronously)
📌 Key Points:
- Faster writes
- Database updated in background
📌 Pros:
- Very high performance for writes
- Reduced database load
📌 Cons:
- Risk of data loss if cache fails
- Temporary inconsistency
📌 Exam Tip:
👉 If question says:
- “High write performance”
- “Eventual consistency acceptable”
➡️ Answer: Write-back
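Write-back can be sketched the same way, with a queue of "dirty" keys that a background job would flush later. In this toy model `flush()` is called manually; in a real system it runs asynchronously, which is exactly where the data-loss risk comes from:

```python
import collections

cache = {}
database = {}
dirty = collections.deque()   # keys written to cache but not yet to the DB

def write_back(key, value):
    cache[key] = value        # fast path: only the cache is touched
    dirty.append(key)

def flush():
    # Normally run by a background worker. If the cache node dies
    # before flush() runs, the pending writes are lost.
    while dirty:
        key = dirty.popleft()
        database[key] = cache[key]
```

Between `write_back()` and `flush()` the database is temporarily inconsistent with the cache — that is the "eventual consistency" trade-off the exam asks about.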
4. Read-Through Cache
📌 How it works:
- Cache itself retrieves data from database when needed
📌 Key Points:
- Application does not directly query DB
- Cache manages data fetching
📌 Pros:
- Simplifies application logic
📌 Cons:
- Less control over caching behavior
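The difference from cache-aside is who does the fetching. A small sketch (illustrative class and names, not a real library API): the cache is constructed with a loader function and fills itself on a miss, so application code only ever talks to the cache.

```python
class ReadThroughCache:
    """The cache itself loads from the backing store on a miss."""

    def __init__(self, loader):
        self._loader = loader   # function that reads from the database
        self._data = {}

    def get(self, key):
        if key not in self._data:
            self._data[key] = self._loader(key)   # cache fills itself
        return self._data[key]

# Usage: the application never queries the "database" dict directly.
db = {"user:1": "Alice"}
users = ReadThroughCache(db.get)
```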
🔹 AWS Caching Services
1. Amazon ElastiCache (Most Important)
📌 What it is:
A fully managed in-memory caching service.
📌 Supported engines:
- Redis
- Memcached
🔸 Redis (Exam Favorite)
Use Redis when:
- Need high availability
- Need replication
- Need persistence
- Need advanced features
Features:
- Multi-AZ replication
- Automatic failover
- Backup & restore
🔸 Memcached
Use Memcached when:
- Need simple caching
- Need multithreaded performance for high throughput
- No need for replication, persistence, or failover
📌 Exam Comparison:
| Feature | Redis | Memcached |
|---|---|---|
| Persistence | Yes | No |
| Replication | Yes | No |
| Multi-AZ | Yes | No |
| Complexity | Higher | Lower |
👉 Exam rule:
- Advanced + reliable → Redis
- Simple + fast → Memcached
2. Amazon CloudFront (Edge Caching)
📌 What it does:
Caches content at edge locations close to users.
📌 Use cases:
- Static content (images, CSS, JS)
- API responses
📌 Benefit:
- Reduces load on backend servers
- Improves global performance
3. DAX (DynamoDB Accelerator)
📌 What it is:
- In-memory cache for DynamoDB
📌 Benefit:
- Microsecond latency
- Reduces DynamoDB read cost
👉 Exam keyword:
- “Improve DynamoDB read performance” → DAX
🔹 Cache Invalidation (Very Important)
📌 What is it?
Removing or updating outdated data in cache.
🔸 Common Methods:
1. TTL (Time-To-Live)
- Cache expires after a set time
- Simple and widely used
👉 Exam: “Data can be slightly outdated” → use TTL
2. Manual Invalidation
- Application deletes cache when data changes
3. Event-Based Invalidation
- Cache updated when database changes
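TTL-based invalidation is simple enough to sketch in pure Python. Redis supports this natively (e.g. via the `EXPIRE` command); the class below is just an illustrative model of the idea, storing an expiry timestamp next to each value:

```python
import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self._ttl = ttl_seconds
        self._data = {}   # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._data[key] = (value, time.monotonic() + self._ttl)

    def get(self, key):
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() >= expires:   # entry expired: invalidate it
            del self._data[key]
            return None                   # caller falls back to the DB
        return value
```

A `get()` after expiry behaves like a cache miss, so stale data is bounded by the TTL — this is why TTL pairs naturally with the "data can be slightly outdated" exam scenario.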
🔹 Caching Patterns in AWS Architecture
1. Database Query Caching
- Cache results of frequent queries
- Reduces database reads
2. Session Caching
- Store user session data in cache
- Improves scalability
3. API Response Caching
- Cache API outputs
- Reduces backend processing
4. Object Caching
- Store computed results or objects
🔹 Cost Optimization with Caching (Exam Critical)
Caching reduces cost by:
- ✅ Fewer database reads → lower RDS/DynamoDB cost
- ✅ Lower database CPU usage
- ✅ Allows a smaller (cheaper) database instance
👉 Example exam logic:
- High read traffic + high DB cost → Use caching
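The cost logic can be made concrete with a back-of-the-envelope calculation. The prices below are made up for illustration (NOT real AWS pricing); the point is only how the cache hit ratio scales down the billable database reads:

```python
# Illustrative model: hypothetical per-read pricing, not real AWS rates.
reads_per_month = 100_000_000
db_cost_per_million_reads = 0.25   # hypothetical price
cache_hit_ratio = 0.90             # 90% of reads served from cache

db_reads = reads_per_month * (1 - cache_hit_ratio)
cost_without_cache = reads_per_month / 1_000_000 * db_cost_per_million_reads
cost_with_cache = db_reads / 1_000_000 * db_cost_per_million_reads

print(round(cost_without_cache, 2))   # 25.0
print(round(cost_with_cache, 2))      # 2.5
```

A 90% hit ratio cuts the read bill by 90% in this model. In practice you must also subtract the cost of the cache nodes themselves, but for read-heavy workloads the savings usually dominate.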
🔹 When to Use Caching (Exam Scenarios)
Use caching when:
- Data is read frequently
- Data changes infrequently
- Database is under heavy load
- Need low latency
🔹 When NOT to Use Caching
Avoid caching when:
- Data changes very frequently
- Strong consistency is required
- Data is rarely accessed
🔹 Common Exam Scenarios & Answers
🔸 Scenario 1:
“Application reads same data frequently and DB cost is high”
✅ Answer: Use ElastiCache
🔸 Scenario 2:
“Need sub-millisecond latency for DynamoDB”
✅ Answer: Use DAX
🔸 Scenario 3:
“Global users need faster content delivery”
✅ Answer: Use CloudFront
🔸 Scenario 4:
“Need high availability and replication in cache”
✅ Answer: Use Redis
🔸 Scenario 5:
“Simple caching with lowest cost”
✅ Answer: Use Memcached
🔹 Key Exam Tips (Must Remember)
- Cache reduces cost + latency
- ElastiCache is the main AWS caching service
- Redis = advanced features
- Memcached = simple, fast
- DAX = DynamoDB caching
- TTL = easiest invalidation method
- Cache-aside = most common strategy
✅ Final Summary
Caching is one of the most important cost optimization techniques in AWS.
It helps:
- Improve performance
- Reduce database load
- Save money
For the exam:
- Know caching strategies
- Know when to use Redis vs Memcached
- Know DAX and CloudFront
- Understand cache invalidation
