
Serverless Decoupling for Resiliency Trade-offs | SAA-C03

Jeff Taakey, Author
21+ Year Enterprise Architect | Multi-Cloud Architect & Strategist.

While preparing for the AWS SAA exam, many candidates struggle with questions that mix application resiliency and service integration. In the real world, this is fundamentally a trade-off between data durability and operational complexity during database downtime. Let’s drill into a simulated scenario.

The Scenario

NovaRetail, a fast-growing e-commerce startup, has built an order processing application using Amazon API Gateway to expose RESTful endpoints. Incoming customer orders trigger AWS Lambda functions that perform business logic and write customer order data into an Amazon Aurora MySQL database.

During periodic maintenance windows, Aurora undergoes version upgrades during which it temporarily refuses new connections. This causes some Lambda invocations to fail when they try to write data, resulting in lost customer order records and unhappy customers.

The Solutions Architect must design an improved architecture that ensures no customer order data is lost during database upgrade windows, while minimizing latency increases and operational overhead.

Key Requirements

Ensure durable storage of customer order data generated during Aurora maintenance outages without losing any records. Preferably decouple ingestion from direct database writes to improve resiliency.

The Options

  • A) Deploy an Amazon RDS Proxy between Lambda and Aurora, configuring Lambda to connect via the proxy to improve failover and connection management.
  • B) Increase the Lambda maximum execution timeout. Add retry logic in Lambda code to resubmit failed database writes if Aurora connection attempts fail.
  • C) Store customer order data temporarily in Lambda local ephemeral storage. Use a separate Lambda function to scan local storage and write data to Aurora after upgrades.
  • D) Persist customer order data via an Amazon Simple Queue Service (SQS) FIFO queue. Create a dedicated Lambda function to poll the queue and write data asynchronously to Aurora.

Correct Answer

Option D.

Step-by-Step Winning Logic

Persisting incoming customer order data to an Amazon SQS FIFO queue decouples the ingestion layer from the Aurora database. No order data is lost during Aurora upgrade windows because messages remain durably stored in the queue (retention is configurable, up to 14 days). A consumer Lambda function drains the queue asynchronously and writes the data once the database is available again, while FIFO semantics preserve ordering and deduplicate retried submissions.

This approach balances resiliency and operational simplicity with minimal incremental cost. SQS charges scale with message volume but are generally low cost. The architecture also gains elasticity and fault-tolerance without increasing Lambda execution durations or requiring complex local state management.


💎 The Architect’s Deep Dive: Why Options Fail

The Traps (Distractor Analysis)

  • Option A: RDS Proxy improves connection pooling and failover handling, but it cannot accept writes on Aurora’s behalf. While the database refuses connections during an upgrade, proxied writes can still fail, so order data remains at risk.
  • Option B: Extending the Lambda timeout and retrying failed writes increases invocation cost and latency, and retries within a single invocation cannot guarantee durability: if the outage outlasts the timeout, the order is lost when the invocation ends.
  • Option C: Lambda’s ephemeral /tmp storage is tied to a single execution environment, limited in size (512 MB by default, configurable up to 10 GB), and discarded when the environment is recycled. Scanning that storage across many concurrent environments is operationally complex and prone to data consistency issues.


The Architect Blueprint

```mermaid
graph TD
    API_User([Customer Request]) --> API_Gateway[API Gateway]
    API_Gateway --> Lambda_Ingest[Lambda: Ingest Order Data]
    Lambda_Ingest --> SQS_Queue[SQS FIFO Queue]
    SQS_Queue --> Lambda_Processor[Lambda: Process Queue]
    Lambda_Processor --> Aurora_DB[Aurora MySQL DB]
```


Diagram Note: The ingestion Lambda persists customer orders to SQS for durability. A dedicated Lambda asynchronously processes SQS messages and writes to Aurora, allowing database upgrades without data loss.

Real-World Practitioner Insight

Exam Rule

For the exam, when a scenario requires durable, decoupled ingestion that survives downstream database downtime, pick Amazon SQS (FIFO if message ordering matters).

Real World

In reality, architects sometimes combine SQS with DynamoDB as a durable buffer or use Amazon Kinesis for streaming scenarios. Observability and alerting around queue backlog and Lambda processing failures must also be factored in operationally.
