Task Statement 1.3: Design solutions that integrate load balancing to meet high
availability, scalability, and security requirements.
📘 AWS Certified Advanced Networking – Specialty
In AWS architectures, load balancers distribute incoming traffic across multiple targets such as EC2 instances, containers, or IP addresses. Proper configuration of load balancers ensures:
- High availability – applications remain accessible even if some resources fail.
- Scalability – the system can handle increasing traffic.
- Security – the infrastructure protects backend systems and maintains controlled access.
AWS provides several load balancers through Elastic Load Balancing, including:
- Application Load Balancer (ALB) – Layer 7 (HTTP/HTTPS)
- Network Load Balancer (NLB) – Layer 4 (TCP/UDP/TLS)
- Gateway Load Balancer (GWLB) – for security appliances
To optimize traffic distribution and reliability, AWS load balancers provide several configuration options.
1. Proxy Protocol
What Is Proxy Protocol?
Proxy Protocol is a feature that allows a load balancer to send the original client connection information to backend servers.
Normally, when a client connects through a load balancer, the backend server only sees the load balancer’s IP address, not the real client IP address.
Proxy Protocol solves this by adding a header containing client connection information.
This header includes:
- Client IP address
- Client port
- Destination IP address
- Destination port
- Protocol information
Why Proxy Protocol Is Important
Backend systems often need to know the original client IP for:
- Logging
- Security monitoring
- Access control policies
- Rate limiting
- Fraud detection
Without Proxy Protocol, the backend server would record the load balancer IP instead of the real user IP.
How It Works
- A client sends a request to the load balancer.
- The load balancer forwards the request to the backend target.
- Before the actual data, the load balancer inserts a Proxy Protocol header.
- The backend server reads the header and identifies the original client.
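The steps above can be sketched in code. The following is a minimal, illustrative parser for the text-based Proxy Protocol v1 header (the format used by the Classic Load Balancer); the function name `parse_proxy_v1` and the sample addresses are assumptions for demonstration, not an AWS API.

```python
def parse_proxy_v1(raw: bytes):
    """Parse a Proxy Protocol v1 header (a single ASCII line ending in CRLF).

    Returns the original client connection metadata plus the application
    bytes that follow the header.
    """
    header, sep, rest = raw.partition(b"\r\n")
    if not sep:
        raise ValueError("incomplete Proxy Protocol header")
    parts = header.decode("ascii").split(" ")
    if parts[0] != "PROXY" or parts[1] not in ("TCP4", "TCP6"):
        raise ValueError("not a Proxy Protocol v1 header")
    proto, src_ip, dst_ip, src_port, dst_port = parts[1:6]
    return {
        "protocol": proto,
        "client_ip": src_ip,       # the real client, not the load balancer
        "client_port": int(src_port),
        "dest_ip": dst_ip,
        "dest_port": int(dst_port),
    }, rest

# Example bytes a backend might receive: header first, then the HTTP request
data = b"PROXY TCP4 198.51.100.22 10.0.1.5 49152 443\r\nGET / HTTP/1.1\r\n"
info, payload = parse_proxy_v1(data)
```

Because the header arrives before the application data, the backend can log `info["client_ip"]` even though the TCP connection itself comes from the load balancer.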
Where Proxy Protocol Is Used in AWS
Proxy Protocol is typically used with:
- Network Load Balancer (Proxy Protocol v2, a binary header format)
- Classic Load Balancer (Proxy Protocol v1, a text header; legacy)
It is commonly used when:
- Backend applications require original source IP visibility
- Security tools analyze client connections
- Logging systems need accurate client information
Important Exam Points
- Proxy Protocol preserves client connection metadata.
- Backend applications must support parsing Proxy Protocol headers.
- Mainly used with Layer 4 load balancers.
2. Cross-Zone Load Balancing
What Is Cross-Zone Load Balancing?
Cross-Zone Load Balancing allows a load balancer to distribute traffic evenly across all registered targets in all Availability Zones.
When cross-zone load balancing is disabled, traffic arriving at a load balancer node is distributed only to targets within the same Availability Zone.
With cross-zone load balancing enabled, the load balancer can send traffic to targets in any Availability Zone.
Why Cross-Zone Load Balancing Is Important
Traffic across Availability Zones may not always be evenly distributed. Without cross-zone balancing, some servers may become overloaded while others remain underutilized.
Cross-zone load balancing ensures:
- Even traffic distribution
- Better resource utilization
- Improved scalability
Example in an IT Environment
A web application runs across:
- 2 EC2 instances in Availability Zone A
- 6 EC2 instances in Availability Zone B
If cross-zone load balancing is disabled, each load balancer node distributes traffic only to targets in its own zone. Because DNS typically splits client traffic roughly evenly between the zone nodes, this means:
- The 2 instances in AZ A each receive a heavy load
- The 6 instances in AZ B remain underused
If cross-zone load balancing is enabled, the load balancer distributes traffic across all 8 instances evenly, regardless of which zone receives the request.
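The arithmetic behind this example can be worked through directly. The sketch below assumes 800 total requests split evenly between the two zone nodes by DNS; the numbers are illustrative only.

```python
# Illustrative per-instance load for the 2-vs-6 instance example above.
requests = 800          # total client requests, split evenly by DNS
az_a, az_b = 2, 6       # registered instances per Availability Zone

# Cross-zone DISABLED: each node balances only within its own zone.
per_instance_a = (requests / 2) / az_a   # AZ A instances: 200 requests each
per_instance_b = (requests / 2) / az_b   # AZ B instances: ~67 requests each

# Cross-zone ENABLED: every node can reach all 8 targets.
per_instance_all = requests / (az_a + az_b)  # 100 requests each
```

With cross-zone balancing enabled, every instance carries the same share of the load regardless of which zone's node received the request.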
AWS Behavior
| Load Balancer Type | Default Cross-Zone Behavior |
|---|---|
| Application Load Balancer | Enabled by default |
| Network Load Balancer | Disabled by default (can be enabled) |
| Classic Load Balancer | Disabled by default via API/CLI (enabled by default when created in the console) |
Benefits
- Balanced traffic distribution
- Better fault tolerance
- Higher application performance
Possible Consideration
When enabled, cross-zone load balancing may cause inter-Availability Zone data transfer, which could result in additional network charges.
3. Session Affinity (Sticky Sessions)
What Is Session Affinity?
Session affinity, also called sticky sessions, ensures that requests from the same client are sent to the same backend server.
This is important for applications that store session information locally on the server.
Why Sticky Sessions Are Needed
Some applications maintain user session data on a specific server.
Session data may include:
- Authentication tokens
- User preferences
- Shopping cart information
- Temporary user data
If requests are distributed randomly to different servers, the application may lose session data.
Sticky sessions ensure the client continues communicating with the same backend instance.
How Sticky Sessions Work
When sticky sessions are enabled:
- The load balancer sends the first request to a backend instance.
- The load balancer creates a session cookie.
- The cookie identifies the backend instance.
- Future requests with that cookie are routed to the same server.
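The cookie-based flow above can be modeled with a toy balancer. This is a sketch, not AWS behavior: the class name `StickyBalancer` and the cookie name `LB_STICKY` are invented for illustration (the real ALB cookie is managed by AWS).

```python
import random

class StickyBalancer:
    """Toy load balancer that pins a client to one target via a cookie,
    mimicking the duration-based sticky-session flow described above."""

    COOKIE = "LB_STICKY"  # illustrative name, not a real AWS cookie

    def __init__(self, targets):
        self.targets = targets

    def route(self, cookies: dict):
        target = cookies.get(self.COOKIE)
        if target not in self.targets:            # first request or stale cookie
            target = random.choice(self.targets)  # pick any healthy backend
        return target, {self.COOKIE: target}      # response sets the cookie

lb = StickyBalancer(["server-a", "server-b", "server-c"])
first, cookies = lb.route({})    # new client: assigned a backend and a cookie
again, _ = lb.route(cookies)     # same cookie: routed to the same backend
```

The key property is that once the cookie exists, routing becomes deterministic for that client.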
Types of Sticky Sessions in AWS
Sticky sessions can be configured using:
Load Balancer Generated Cookies
The load balancer creates a cookie.
Examples:
- ALB duration-based cookie (`AWSALB`)
- Classic Load Balancer cookie (`AWSELB`)
Application Generated Cookies
The application generates the cookie, and the load balancer uses it to maintain session routing.
Example in an IT Environment
An enterprise web portal deployed on multiple EC2 servers stores login sessions locally.
If a user logs in through Server A, sticky sessions ensure all subsequent requests from that user are always sent to Server A.
Without sticky sessions, requests might reach Server B or C, causing session authentication problems.
Important Exam Points
- Sticky sessions maintain client-to-server consistency.
- Used for applications with stateful sessions.
- Not recommended for stateless architectures.
Modern cloud architectures often use external session storage such as Amazon ElastiCache or Amazon DynamoDB instead of sticky sessions.
4. Routing Algorithms
Routing algorithms determine how the load balancer decides which backend target receives a request.
Different algorithms improve performance depending on application requirements.
Round Robin
Round Robin distributes traffic evenly in sequence across all targets.
Example request distribution:
Server A → Server B → Server C → Server A → Server B
Characteristics
- Equal distribution
- Simple implementation
- Works well when all servers have similar capacity
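Round robin is simple enough to express in a few lines; the sequence shown above falls out directly:

```python
from itertools import cycle

targets = ["server-a", "server-b", "server-c"]
rr = cycle(targets)  # endless A → B → C → A → B ... rotation

# The first five requests land in strict sequence:
assignments = [next(rr) for _ in range(5)]
# ['server-a', 'server-b', 'server-c', 'server-a', 'server-b']
```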
Least Outstanding Requests
This algorithm sends traffic to the server currently handling the fewest active requests.
Characteristics
- Helps balance uneven workloads
- Useful when request processing time varies
Example:
If Server A has 10 active requests and Server B has 3, the next request is sent to Server B.
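The selection rule in this example reduces to a minimum over active-request counts. The sketch below uses invented server names and counts matching the example:

```python
# In-flight request counts per backend (illustrative values from the example)
active = {"server-a": 10, "server-b": 3, "server-c": 7}

def least_outstanding(active_counts: dict) -> str:
    """Return the target currently handling the fewest active requests."""
    return min(active_counts, key=active_counts.get)

target = least_outstanding(active)  # server-b has only 3 in flight
active[target] += 1                 # the new request is now in flight there
```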
Hash-Based Routing
In some cases, the load balancer uses hashing algorithms to determine the target server.
Common hash keys include:
- Source IP
- Session cookie
- Request parameters
This method ensures consistent request routing.
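A source-IP hash illustrates why this routing is consistent: the same key always yields the same target. This is a generic sketch, not the hash AWS uses internally.

```python
import hashlib

targets = ["server-a", "server-b", "server-c"]

def route_by_source_ip(client_ip: str) -> str:
    # A stable hash of the source IP always maps to the same target index
    digest = hashlib.sha256(client_ip.encode()).digest()
    return targets[int.from_bytes(digest[:4], "big") % len(targets)]
```

As long as the target list is unchanged, every request from a given IP reaches the same backend.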
Flow Hash (Network Load Balancer)
Network Load Balancer uses a flow hash algorithm based on:
- Source IP
- Destination IP
- Source port
- Destination port
- Protocol
This ensures all packets in the same connection go to the same backend target.
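Hashing the full 5-tuple can be sketched the same way; again this is an illustration of the idea, not the NLB's actual hash function.

```python
import hashlib

targets = ["target-1", "target-2", "target-3"]

def flow_hash(src_ip, dst_ip, src_port, dst_port, proto) -> str:
    """Hash the 5-tuple so every packet of one connection picks one target."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return targets[int.from_bytes(digest[:4], "big") % len(targets)]

# One TCP connection = one 5-tuple = one backend for its entire lifetime
flow = ("198.51.100.7", "10.0.2.15", 53211, 443, "tcp")
```

Changing any element of the tuple (for example, a new source port on a new connection) may hash to a different target, but packets within one connection never move.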
Important Exam Points
| Algorithm | Behavior | Use Case |
|---|---|---|
| Round Robin | Sequential distribution | Similar server capacity |
| Least Outstanding Requests | Sends to least busy server | Variable request workload |
| Hash-based routing | Deterministic routing | Session consistency |
| Flow Hash | Maintains connection integrity | TCP/UDP traffic |
Security Considerations
Proper load balancer configuration also improves security.
Key security benefits include:
- Hiding backend server IP addresses
- Enforcing TLS encryption
- Integrating with AWS WAF
- Logging and monitoring through Amazon CloudWatch
Proxy Protocol also helps security tools identify real client IP addresses.
Best Practices for AWS Load Balancer Configuration
- Enable cross-zone load balancing for even traffic distribution.
- Use sticky sessions only when required.
- Use Proxy Protocol when backend systems require the real client IP.
- Choose routing algorithms appropriate for the workload.
- Monitor load balancer performance using CloudWatch metrics.
Key Exam Takeaways
For the AWS Advanced Networking Specialty exam, remember:
- Proxy Protocol preserves original client connection information.
- Cross-Zone Load Balancing distributes traffic evenly across all Availability Zones.
- Sticky Sessions maintain client-to-server session persistence.
- Routing algorithms determine how traffic is distributed among targets.
- Different AWS load balancers support different configuration capabilities.
Understanding these options is essential when designing scalable, highly available AWS networking architectures.
