
Auto-Scaling Shared File Storage Trade-offs | SAA-C03

Jeff Taakey
21+ Year Enterprise Architect | Multi-Cloud Architect & Strategist.

While preparing for the AWS SAA-C03, many candidates get confused by storage selection for auto-scaling architectures. In the real world, this is fundamentally a decision about shared access patterns vs. operational complexity. Let’s drill into a simulated scenario.

The Scenario
#

GlobalMedia Analytics operates a video processing platform on AWS that generates research reports from media content. The platform produces output files ranging from 50 GB for short-form content analysis to 300 TB for full-length documentary processing. The engineering team has identified three critical requirements:

  1. All compute instances must access files using standard POSIX file system operations (the legacy codebase relies on native file I/O)
  2. Processing demand fluctuates dramatically, from 5 instances during off-peak hours to 200+ instances during campaign launches
  3. The infrastructure team has a hiring freeze and cannot dedicate engineers to storage management

Key Requirements
#

Design a storage solution that supports automatic scaling, maintains high availability across multiple data centers, requires minimal operational intervention, and preserves standard file system semantics.

The Options
#

  • A) Containerize the application on Amazon ECS and use Amazon S3 for storage
  • B) Containerize the application on Amazon EKS and use Amazon EBS for storage
  • C) Deploy the application on EC2 instances in a Multi-AZ Auto Scaling group and use Amazon EFS for storage
  • D) Deploy the application on EC2 instances in a Multi-AZ Auto Scaling group and use Amazon EBS for storage

Correct Answer
#

Option C: Deploy the application on EC2 instances in a Multi-AZ Auto Scaling group and use Amazon EFS for storage.

Step-by-Step Winning Logic
#

This solution represents the optimal trade-off between operational simplicity and architectural requirements:

  1. Standard File System Semantics: Amazon EFS provides full POSIX compliance, allowing the legacy application to use native file operations without code changes (mount points, directory structures, file locking)

  2. Elastic Scalability: EFS automatically scales from gigabytes to petabytes without provisioning; no capacity planning is required as file sizes grow from 50 GB to 300 TB

  3. Multi-Instance Concurrent Access: The critical advantage here is that EFS is a shared file system that multiple EC2 instances can mount simultaneously, which is essential for Auto Scaling groups where instance count fluctuates from 5 to 200+

  4. High Availability Built-In: EFS automatically replicates across multiple Availability Zones within a region, eliminating the need for custom failover logic

  5. Zero Operational Overhead: No volume management, no snapshot scheduling, no capacity monitoring; this aligns perfectly with the hiring freeze constraint
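In practice, points 3 and 5 come down to a few lines of launch-template user data: every instance the Auto Scaling group starts mounts the same file system. A minimal sketch, with a hypothetical file system ID and mount point (the actual install/mount commands are shown as comments because they require a root shell on an EC2 host):

```shell
# Hypothetical EFS user-data sketch for an Auto Scaling launch template.
# fs-0123456789abcdef0 and /mnt/media-output are made-up values.
FS_ID="fs-0123456789abcdef0"
MOUNT_POINT="/mnt/media-output"

# On the instance (requires amazon-efs-utils and root):
# yum install -y amazon-efs-utils
# mkdir -p "$MOUNT_POINT"
# mount -t efs -o tls "$FS_ID":/ "$MOUNT_POINT"

# fstab entry so the mount survives reboots:
echo "$FS_ID:/ $MOUNT_POINT efs _netdev,tls 0 0"
```

Because every instance runs the same script, a file written by instance 5 is immediately visible to instance 200; no per-instance storage configuration is needed.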


💎 The Architect’s Deep Dive: Why Options Fail
#

The Traps (Distractor Analysis)
#

Why not Option A (ECS + S3)?

  • File System Mismatch: S3 is an object store, not a file system. While you can use tools like s3fs-fuse or Mountpoint for S3, they don’t provide full POSIX compliance (no file locking, atomic renames, or append operations)
  • Code Refactoring Required: The application would need significant changes to use S3 API calls instead of native file I/O
  • Operational Complexity: Managing state in containers with object storage requires additional orchestration logic
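To make the POSIX gap concrete, here is a toy sketch using a local temp directory as a stand-in for an EFS mount. The append pattern the legacy code depends on is a one-liner on a file system; the commented lines show the read-modify-write dance the same update would require against S3 (hypothetical bucket name):

```shell
# A POSIX append: trivial on a mounted file system, impossible in-place on S3.
workdir=$(mktemp -d)                                  # stand-in for /mnt/efs
echo "frame-0001 analyzed" >> "$workdir/report.log"
echo "frame-0002 analyzed" >> "$workdir/report.log"   # O_APPEND just works

# The S3 equivalent is a full-object read-modify-write, roughly:
# aws s3 cp s3://media-bucket/report.log /tmp/report.log
# echo "frame-0002 analyzed" >> /tmp/report.log
# aws s3 cp /tmp/report.log s3://media-bucket/report.log

wc -l < "$workdir/report.log"   # line count after two appends
```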

Why not Option B (EKS + EBS)?

  • Shared Access Limitation: EBS volumes can only attach to a single EC2 instance (except Multi-Attach for specific use cases with limitations). Auto Scaling to 200 instances would require complex volume cloning or network file system layers
  • Kubernetes Overhead: Introducing EKS adds significant operational complexity (cluster management, pod scheduling, persistent volume claims) without solving the core storage sharing problem
  • Cost and Expertise: EKS control plane costs ($0.10/hour = ~$73/month) plus the need for Kubernetes expertise contradicts “minimal operational overhead”

Why not Option D (EC2 + EBS)?

  • The Fatal Flaw: EBS cannot be shared across multiple instances in an Auto Scaling group. Each instance would have its own isolated EBS volume
  • Data Synchronization Nightmare: You’d need to build custom replication logic (rsync scripts, third-party tools, or distributed file systems like GlusterFS)
  • High Availability Gaps: EBS volumes are AZ-specific. Cross-AZ failover requires snapshots and restore processes, adding complexity
  • Operational Burden: Manual snapshot management, volume resizing, and capacity planning directly conflict with the “minimal operations” requirement
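The per-instance-volume problem is easy to demonstrate with two temp directories standing in for two instances’ EBS volumes (a toy sketch, not real EBS):

```shell
# Two "instances", each with its own isolated volume (temp dirs as stand-ins).
vol_a=$(mktemp -d)   # instance A's EBS volume
vol_b=$(mktemp -d)   # instance B's EBS volume

echo "report-123 complete" > "$vol_a/status.txt"   # A writes a result

# B cannot see it; the volumes are independent block devices:
ls -A "$vol_b" | wc -l        # 0 files on B's volume

# So you end up scripting your own replication (rsync, cron jobs, etc.):
cp -r "$vol_a/." "$vol_b/"
cat "$vol_b/status.txt"
```

Scale that manual sync step to 200+ instances churning through an Auto Scaling group and the operational burden becomes obvious.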


The Architect Blueprint
#

```mermaid
graph TD
    Users([End Users]) --> ALB[Application Load Balancer]
    ALB --> ASG[Auto Scaling Group<br/>Multi-AZ<br/>5-200 Instances]
    ASG --> EC2_AZ1[EC2 Instances<br/>Availability Zone A]
    ASG --> EC2_AZ2[EC2 Instances<br/>Availability Zone B]
    EC2_AZ1 -.NFS Mount.-> EFS[Amazon EFS<br/>Shared File System<br/>Auto-Scaling Storage]
    EC2_AZ2 -.NFS Mount.-> EFS
    EFS --> AZ1_Replica[EFS Storage<br/>AZ-A Replica]
    EFS --> AZ2_Replica[EFS Storage<br/>AZ-B Replica]
    style EFS fill:#FF9900,stroke:#232F3E,color:#fff
    style ASG fill:#3F8624,stroke:#232F3E,color:#fff
    style ALB fill:#8C4FFF,stroke:#232F3E,color:#fff
```


Diagram Note: All EC2 instances across multiple AZs mount the same EFS file system via NFS, enabling seamless data sharing as the Auto Scaling group expands or contracts, while EFS automatically handles cross-AZ replication and scaling.

Real-World Practitioner Insight
#

Exam Rule
#

“For the SAA-C03 exam, when you see ‘standard file system’ + ‘auto-scaling compute’ + ‘minimal operations’, immediately think Amazon EFS. EBS is for single-instance block storage; S3 requires API-based access.”

Real World
#

“In production, we’d add these considerations:

  1. Performance Tiers: Use EFS Bursting mode for unpredictable workloads or Provisioned Throughput if you know you need sustained high performance (important for 300 TB files)

  2. Lifecycle Policies: Enable EFS Infrequent Access (IA) storage class to automatically move files not accessed for 30/60/90 days, reducing costs by up to 92% (critical for archival of completed reports)

  3. Cost Monitoring: At scale, EFS can become expensive (Standard storage ~$0.30/GB-month vs. EBS gp3 ~$0.08/GB-month). For 300 TB, that’s $90,000/month vs. $24,000/month. But remember, the EBS option requires custom engineering that could cost far more

  4. Hybrid Approach: Some teams use EFS for active processing and S3 Intelligent-Tiering for long-term storage, with Lambda functions moving completed files post-processing

  5. VPC Design: In multi-account environments, use EFS Access Points and VPC Peering/Transit Gateway to share file systems across accounts while maintaining security boundaries”
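The cost comparison in point 3 is simple arithmetic; a quick back-of-envelope sketch using the assumed list prices above (actual prices vary by region and throughput mode):

```shell
# Monthly storage cost at 300 TB, using the assumed prices:
# EFS Standard ~$0.30/GB-month, EBS gp3 ~$0.08/GB-month.
size_gb=$((300 * 1000))            # 300 TB ~= 300,000 GB
efs_usd=$((size_gb * 30 / 100))    # price in cents/GB -> whole dollars
ebs_usd=$((size_gb * 8 / 100))
echo "EFS Standard: \$${efs_usd}/month"   # $90000/month
echo "EBS gp3:      \$${ebs_usd}/month"   # $24000/month
```

Note this compares raw storage only; the EBS figure excludes the duplicated copies, snapshots, and custom replication engineering that Option D would actually require.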
