
Cross-Account S3 Replication IAM Trust Trade-off | SAP-C02

Author: Jeff Taakey, 21+ Year Enterprise Architect | Multi-Cloud Architect & Strategist.

While preparing for the AWS SAP-C02, many candidates get confused by cross-account S3 access patterns and the dual-authorization model. In the real world, this is fundamentally a decision about execution context vs. policy attachment point combined with least privilege and operational simplicity. Let’s drill into a simulated scenario.

The Scenario

GlobalDataOps Inc. is undergoing an organizational restructuring that requires migrating 850 TB of historical analytics data from their legacy AWS account (Account A: 111111111111) to a newly created data analytics account (Account B: 222222222222). The Chief Data Officer has mandated that this migration must be executed using AWS CLI to maintain audit trails and enable scriptable automation for future similar migrations.

The infrastructure team has already created an empty S3 bucket (globaldataops-analytics-target) in Account B. The source bucket (globaldataops-legacy-warehouse) in Account A contains objects with existing ACLs that must be preserved during transfer.

Key Requirements

Design a cross-account S3 data migration solution that:

  • Uses AWS CLI (aws s3 sync) for execution
  • Maintains object ACL integrity
  • Follows AWS IAM best practices for cross-account access
  • Minimizes operational complexity for one-time migration

The Options

Select THREE correct steps that, when combined, will successfully enable the data migration:

  • A) Create a bucket policy on the destination bucket that grants the source bucket permissions to list contents, upload objects, and set object ACLs, then attach this policy to the destination bucket.

  • B) Create a bucket policy on the source bucket that grants the destination account’s users permissions to list contents and read objects, then attach this policy to the source bucket.

  • C) In the source account, create an IAM policy that allows the source account user to list and read from the source bucket AND list, upload, and set ACLs on the destination bucket, then attach this policy to the executing user.

  • D) In the destination account, create an IAM policy that allows the destination account user to list and read from the source bucket AND list, upload, and set ACLs on the destination bucket, then attach this policy to the executing user.

  • E) Execute aws s3 sync as a user in the source account, specifying both source and destination bucket URIs.

  • F) Execute aws s3 sync as a user in the destination account, specifying both source and destination bucket URIs.

Correct Answer

Options B, D, and F

Step-by-Step Winning Logic

This scenario tests your understanding of AWS’s dual-authorization model for cross-account S3 access and the critical concept of execution context.

Why This Combination Works:

  1. Option B (Source Bucket Policy): Grants the destination account’s principals the ability to read from the source bucket. This is the resource-based permission that says “Account B, you may read my data.”

  2. Option D (Destination Account IAM Policy): Grants the destination account’s user the identity-based permissions to:

    • Access the source bucket (which is allowed by B)
    • Write to the destination bucket (which they own)
    • Set ACLs on uploaded objects

  3. Option F (Execute from Destination Account): When you execute aws s3 sync as a destination account user, AWS evaluates permissions from the caller’s perspective. The user has:

    • IAM permission to read source (from D)
    • Resource permission from source bucket (from B)
    • Full control over destination bucket (same-account access)
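Option B's bucket policy, applied to the source bucket in Account A, might look like the following sketch. The statement IDs are illustrative; s3:GetObjectAcl is included because the scenario requires preserving object ACLs during transfer.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDestinationAccountList",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::222222222222:root" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::globaldataops-legacy-warehouse"
    },
    {
      "Sid": "AllowDestinationAccountRead",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::222222222222:root" },
      "Action": ["s3:GetObject", "s3:GetObjectAcl"],
      "Resource": "arn:aws:s3:::globaldataops-legacy-warehouse/*"
    }
  ]
}
```

Note that ListBucket targets the bucket ARN while the object-level actions target the /* object ARN; mixing these up is a common source of Access Denied errors.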

The Cross-Account Authorization Model: Cross-account access requires BOTH:

  • The resource owner (Account A) must grant permission (resource policy)
  • The accessing principal (Account B user) must have permission to use those grants (identity policy)
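Option D's identity policy, attached to the executing user in Account B, might be sketched as follows (statement IDs illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadSourceBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:GetObjectAcl"],
      "Resource": [
        "arn:aws:s3:::globaldataops-legacy-warehouse",
        "arn:aws:s3:::globaldataops-legacy-warehouse/*"
      ]
    },
    {
      "Sid": "WriteDestinationBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:PutObject", "s3:PutObjectAcl"],
      "Resource": [
        "arn:aws:s3:::globaldataops-analytics-target",
        "arn:aws:s3:::globaldataops-analytics-target/*"
      ]
    }
  ]
}
```

Together with the source bucket policy above, this satisfies both halves of the dual-authorization model from the caller's perspective.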

💎 Professional-Level Analysis

This section breaks down the scenario from a professional exam perspective, focusing on constraints, trade-offs, and the decision signals used to eliminate incorrect options.

🔐 Expert Deep Dive: Why Options Fail

This walkthrough explains how the exam expects you to reason through the scenario step by step, highlighting the constraints and trade-offs that invalidate each incorrect option.


🔐 The Traps (Distractor Analysis)

This section explains why each incorrect option looks reasonable at first glance, and the specific assumptions or constraints that ultimately make it fail.

The difference between the correct answer and the distractors comes down to one decision assumption most candidates overlook.

Why not A?

  • Fatal Flaw: Buckets cannot be principals. Option A attempts to grant permissions to the “source bucket” rather than to Account A’s principals or Account B’s principals. S3 bucket policies must specify AWS principals (accounts, users, roles), not other S3 buckets.
  • Cost Impact: Implementing this would waste 2-4 hours of troubleshooting ($300-600 in engineer time).

Why not C?

  • Execution Context Mismatch: This sets up a source account user to push data into the destination bucket, which only makes sense when paired with Option E.
  • The Critical Miss: Even paired with E it fails. Without a bucket policy on the destination bucket granting Account A write permissions, the source account user cannot write to Account B’s bucket (the same-account ownership assumption does not hold).
  • Security Anti-pattern: Granting a legacy account ongoing write access into a new security boundary violates least-privilege principles.

Why not E?

  • Missing Destination Authorization: For a source account user to write to the destination bucket, the destination bucket must have a resource policy explicitly granting Account A write permissions.
  • Operational Debt: This approach requires maintaining credentials in the legacy account post-migration, creating unnecessary security surface area.

The C+E Combination Trap: Many candidates choose C+E thinking “if the source user has IAM permission to write to the destination, it should work.” This fails because:

  • An identity policy in Account A only grants its own principals permission to attempt actions
  • For a cross-account write, the destination bucket’s owner (Account B) must also allow it via a resource policy
  • No option supplies that destination bucket policy, so cross-account writes from Account A are denied
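For contrast, making C+E work would require an additional, unlisted bucket policy on the destination bucket. A hypothetical sketch of that missing piece:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "HypotheticalAllowLegacyAccountWrite",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": ["s3:ListBucket", "s3:PutObject", "s3:PutObjectAcl"],
      "Resource": [
        "arn:aws:s3:::globaldataops-analytics-target",
        "arn:aws:s3:::globaldataops-analytics-target/*"
      ]
    }
  ]
}
```

This is exactly the lingering legacy-account write access the correct answer avoids, which is why the exam treats C+E as a trap rather than an alternative.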

💎 Professional Decision Matrix

This SAP-C02 professional section is locked.
Free beta access reveals the exam logic.

100% Free Beta Access

🔐 The Solution Blueprint

This blueprint visualizes the expected solution, showing how services interact and which architectural pattern the exam is testing.

Seeing the full solution end to end often makes the trade-offs—and the failure points of simpler options—immediately clear.

```mermaid
graph TB
    subgraph Account_A["Account A (111111111111)<br/>Source Account"]
        SourceBucket[("S3: globaldataops-legacy-warehouse<br/>850 TB Data")]
        SourceBucketPolicy["Bucket Policy<br/>Allow Account B: s3:ListBucket, s3:GetObject"]
    end
    subgraph Account_B["Account B (222222222222)<br/>Destination Account"]
        DestUser["IAM User: migration-operator"]
        DestUserPolicy["IAM Policy<br/>• Source: List/Get<br/>• Dest: List/Put/PutACL"]
        DestBucket[("S3: globaldataops-analytics-target<br/>Empty")]
    end
    DestUser -->|"1. Authenticates with"| DestUserPolicy
    DestUserPolicy -->|"2. Grants permission to read"| SourceBucket
    SourceBucketPolicy -->|"3. Authorizes Account B access"| SourceBucket
    DestUser -->|"4. aws s3 sync s3://source s3://dest"| CLI["AWS CLI Execution"]
    CLI -->|"5. Read (authorized by B + source policy)"| SourceBucket
    CLI -->|"6. Write (same-account access)"| DestBucket
    DestUserPolicy -->|"7. Grants PutObject/ACL"| DestBucket
    style SourceBucket fill:#FF9999,stroke:#333,stroke-width:2px
    style DestBucket fill:#99CCFF,stroke:#333,stroke-width:2px
    style DestUser fill:#90EE90,stroke:#333,stroke-width:3px
    style CLI fill:#FFD700,stroke:#333,stroke-width:2px
```

Diagram Note: The destination account user, authorized by both its IAM policy and the source bucket’s resource policy, pulls data from the source bucket and writes to the destination bucket, leveraging dual authorization plus same-account write privileges.
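Assuming the policies from Options B and D are in place, the migration run from Account B might look like this sketch. Bucket names come from the scenario; the --acl flag is an assumption, relevant only if the destination bucket still uses ACLs rather than the bucket-owner-enforced ownership setting.

```shell
# Run as an IAM principal in Account B (222222222222).
SRC="s3://globaldataops-legacy-warehouse"
DST="s3://globaldataops-analytics-target"

# Build the commands first so the plan can be reviewed before anything runs.
DRY_RUN_CMD="aws s3 sync $SRC $DST --dryrun"
SYNC_CMD="aws s3 sync $SRC $DST --acl bucket-owner-full-control"

echo "$DRY_RUN_CMD"   # preview: lists what would be copied without transferring data
echo "$SYNC_CMD"      # actual transfer; rerunnable, copies only missing/changed objects
# Uncomment to execute once the dry run looks right:
# $DRY_RUN_CMD && $SYNC_CMD
```

Because sync is incremental and rerunnable, an interrupted 850 TB transfer can simply be restarted with the same command.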


🔐 The Decision Matrix

This matrix compares all options across cost, complexity, and operational impact, making the trade-offs explicit and the correct choice logically defensible.

At the professional level, the exam expects you to justify your choice by explicitly comparing cost, complexity, and operational impact.

| Option | Est. Complexity | Est. Monthly Cost | Pros | Cons | Security Impact |
|---|---|---|---|---|---|
| B+D+F (Correct) | Medium | $0 transfer (same region)<br>~$19,550 S3 Standard storage (850,000 GB × $0.023/GB)<br>Total: ~$19,550/mo | ✅ Follows least privilege<br>✅ Clean security boundary<br>✅ Execution from target account<br>✅ No lingering source credentials | ⚠️ Requires understanding dual authorization<br>⚠️ Two policy configurations | High: minimal cross-account exposure |
| A+C+E | Low (appears simple) | N/A: non-functional | ❌ None: does not work | ❌ Buckets cannot be principals<br>❌ Violates S3 policy syntax<br>❌ 4-8 hours debugging time ($600-1,200) | N/A: configuration fails |
| C+E (without dest bucket policy) | Medium | N/A: Access Denied | ⚠️ Seems logical to source-side engineers | ❌ Missing destination authorization<br>❌ Requires additional bucket policy not listed<br>❌ 2-3 hours troubleshooting ($300-450) | Medium: creates source-account dependency |
| B+C+E (hypothetical) | Medium-High | ~$19,550/mo storage<br>+$180/mo IAM policy mgmt overhead | ⚠️ Would technically work if a destination bucket policy were added | ❌ Requires unlisted bucket policy on destination<br>❌ Violates separation of concerns<br>❌ Security audit risk: $15K-25K | Low: legacy account retains write access post-migration |
| Manual cross-account role assumption | High | ~$19,550/mo storage<br>+$350/mo Lambda/Step Functions orchestration | ✅ More granular control<br>✅ Auditable via CloudTrail role sessions | ❌ Overengineering for a one-time migration<br>❌ 16-24 hours development time ($2,400-3,600)<br>❌ Ongoing Lambda invocation costs | High, but unnecessarily complex |

FinOps Key Insight:

  • The correct solution (B+D+F) costs $0 in data transfer (same-region) and requires ~3 hours of senior engineer time ($450-600) for policy configuration and testing.
  • Choosing the wrong pattern (C+E) wastes 5-8 hours ($750-1,200) in troubleshooting, plus potential emergency consulting fees ($200-350/hour).
  • For 850 TB, using AWS DataSync instead would add roughly $10,625 in per-GB task fees (850,000 GB × $0.0125/GB), making CLI the economically rational choice.


🔐 Real-World Practitioner Insight

This section connects the exam scenario to real production environments, highlighting how similar decisions are made—and often misjudged—in practice.

This is the kind of decision that frequently looks correct on paper, but creates long-term friction once deployed in production.

Exam Rule

For SAP-C02, when you see cross-account S3 access via CLI, always remember:

  1. Dual-authorization required: Resource policy (bucket) + Identity policy (IAM)
  2. Execution context matters: The calling principal must have permissions in BOTH directions
  3. Destination-side execution is preferred for write operations to avoid granting write-back permissions to legacy accounts.
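Since execution context is the thing being probed, a quick sanity check before running the sync is to confirm which account the CLI is actually authenticated against. A sketch, shown without executing the AWS call:

```shell
# The sync must run as an Account B (destination) principal.
EXPECTED_ACCOUNT="222222222222"
CHECK_CMD="aws sts get-caller-identity --query Account --output text"

# In a real session: [ "$($CHECK_CMD)" = "$EXPECTED_ACCOUNT" ] || exit 1
echo "Verify '$CHECK_CMD' prints $EXPECTED_ACCOUNT before syncing."
```

Running the migration from the wrong account is the real-world equivalent of picking C+E on the exam.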

Real World

In production environments, we would enhance this solution with:

  1. S3 Batch Operations with Inventory: For 850 TB, we’d use S3 Inventory to generate a manifest, then S3 Batch Operations for parallel, resumable transfers with built-in retry logic (costs ~$2,125 vs. $0 for CLI, but provides job tracking and automatic retry).

  2. VPC Endpoint for S3: If bandwidth costs were a concern (cross-region scenario), we’d use VPC endpoints to avoid NAT Gateway charges ($0.045/GB), saving ~$38,250 for 850 TB.

  3. S3 Replication with RTC: For ongoing synchronization (not one-time migration), we’d use S3 Replication with Replication Time Control (RTC), costing ~$21,250 (850 TB × $0.025/GB) but backed by an SLA to replicate 99.99% of objects within 15 minutes.

  4. Temporary Cross-Account Role: Instead of IAM user policies, we’d create a time-limited cross-account role in Account B that Account A can assume, with session duration limits (1-12 hours) and automatic credential expiration.

  5. Cost Consideration Not in Exam: The exam doesn’t mention S3 storage class. In reality, if this is “cold” analytics data, we’d migrate directly to S3 Glacier Flexible Retrieval ($0.0036/GB vs. $0.023/GB Standard), saving roughly $197,900/year (850,000 GB × $0.0194/GB savings × 12 months).
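For the ongoing-synchronization case (item 3 above), a cross-account replication configuration with RTC, applied to the source bucket via aws s3api put-bucket-replication, might look like the sketch below. The role name is hypothetical, the schema is abbreviated, and versioning must be enabled on both buckets.

```json
{
  "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
  "Rules": [
    {
      "ID": "ReplicateToAnalytics",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": {
        "Bucket": "arn:aws:s3:::globaldataops-analytics-target",
        "Account": "222222222222",
        "AccessControlTranslation": { "Owner": "Destination" },
        "ReplicationTime": { "Status": "Enabled", "Time": { "Minutes": 15 } },
        "Metrics": { "Status": "Enabled", "EventThreshold": { "Minutes": 15 } }
      }
    }
  ]
}
```

Note the AccessControlTranslation block: it hands object ownership to the destination account, the replication analogue of the ACL concerns in the CLI approach.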

The Certification Simplification: The exam assumes:

  • Same-region transfer (no data transfer costs)
  • Immediate access requirement (no Glacier)
  • One-time migration (no replication)
  • Standard S3 storage class

In an actual $1M project, the storage class decision alone would justify a full cost-benefit analysis, potentially involving S3 Intelligent-Tiering (auto-optimization) or Glacier with selective retrieval.
