Enterprise Cloud Governance

Enterprise Cloud Hierarchy: AWS, Azure, and GCP Side by Side

Every cloud has a hierarchy. AWS calls them Organizations and OUs. Azure calls them Management Groups. GCP calls them Folders. They solve the same problem — but the way they do it shapes everything from your IAM strategy to your monthly bill.


What's in this article

  1. The problem every enterprise hits on day two
  2. The hierarchy: Organizations, Management Groups, and Folders
  3. AWS: Organizations, OUs, and SCPs
  4. Azure: Management Groups and Policy
  5. GCP: Organization, Folders, and Projects
  6. Side-by-side comparison
  7. Cross-enterprise access: federated identity vs native IAM
  8. Billing boundaries and consolidated cost management
  9. Account boundary recommendation: one account per app per env
01

The problem every enterprise hits on day two

Day one in the cloud is simple. You create an account, spin up a server, and get something running. It is fast, it is empowering, and it feels like the future.

Day two is when reality arrives. A second team wants access. A third team spins up their own environment. Someone in finance asks: "Who is spending what?" Someone in security asks: "Who has access to what?" And an architect asks the question that should have come first: "How are we going to structure all of this?"

The organisational structure problem is not a cloud problem — it is a management problem. The cloud just makes it visible faster. A sprawl of unstructured accounts, subscriptions, or projects will cost more, be harder to secure, and be harder to audit than anything you ran on-premises.

Every major cloud provider has solved this with a hierarchy. A tree structure that lets enterprises group resources, apply policies at scale, and consolidate billing — without losing the autonomy that individual teams need to move fast. The hierarchy is different on each cloud. The mental model is the same.

02

The hierarchy: Organizations, Management Groups, and Folders

Think of it as a company org chart, but for your cloud resources. At the top sits the root — the entity that owns everything. Below it are grouping containers that exist purely to carry policy and structure. At the leaves of the tree sit the actual billing and resource boundaries: AWS accounts, Azure subscriptions, and GCP projects.

Usage of the term "container": in this article, "container" refers to any organizational unit in the cloud hierarchy that can hold resources or other containers (AWS OUs, Azure Management Groups, and GCP Folders). It should not be confused with containerization technologies like Docker or Kubernetes, which are unrelated concepts.

Resources — virtual machines, databases, storage buckets — live inside those leaf containers. You cannot deploy a resource directly into a folder or a management group. The grouping layers are for governance, not for creating resources.

The governance hierarchy across all three clouds
AWS: Management Account → Organization root → Organizational Units (nestable) → AWS Account (billing boundary) → resources (EC2, S3, …); governed by SCPs.
Azure: Entra ID Tenant → Root Management Group → Management Groups (nestable) → Subscription (billing boundary) → Resource Groups → resources (VMs, SQL, …); governed by Azure Policy.
Google Cloud: Google Workspace / Cloud Identity → Organization node → Folders (nestable) → Project (billing boundary) → resources (GCE, GCS, …); governed by Org Policy.
In all three, the account / subscription / project is the billing and resource-isolation boundary.

Three clouds, one pattern. The leaf node — the AWS account, the Azure subscription, the GCP project — is always where the billing meter runs and where IAM permissions begin to mean something concrete. Everything above that leaf is governance scaffolding: a place to attach policies that cascade down.

03

AWS: Organizations, OUs, and SCPs

AWS launched Organizations in 2017 specifically to address enterprise multi-account sprawl. Before it existed, teams managed accounts in isolation — each with its own billing, its own IAM configuration, and no shared governance layer.

The structure

At the top sits the Management Account (formerly called the master account). This account owns the organisation and should contain nothing except organisational governance — no workloads, no application resources. Treat it as the keys to the kingdom: access should be minimal, audited, and break-glass only.

Below the root node of the organization sit Organizational Units (OUs). These are containers for accounts. OUs can be nested up to five levels deep. A common pattern is to mirror your business structure: a top-level OU for each business unit or environment type, then child OUs for individual applications or teams.

At the leaves sit AWS Accounts. These are the actual billing and resource isolation boundaries. Each account gets its own IAM namespace, its own VPC by default, and its own billing line in consolidated billing. Cross-account access is always explicit — nothing leaks between accounts without a deliberate trust relationship.

Service Control Policies (SCPs)

The most powerful governance mechanism in AWS Organizations is the Service Control Policy (SCP). SCPs are JSON IAM-like policies attached to OUs or accounts. They define the maximum permissions any principal inside that OU can ever have — they are a ceiling, not a grant.

Key SCP insight: Even if a developer has AdministratorAccess in their account, an SCP attached to their OU can prevent them from creating resources in certain regions, disabling CloudTrail, or accessing services your organisation has not approved. The SCP wins.

SCPs do not apply to the Management Account itself. They do not grant permissions — they only restrict. The effective permission of any principal is the intersection of what their IAM policies allow and what their SCP permits.
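As a concrete illustration, a minimal SCP might combine a region lock with CloudTrail protection. This is a sketch only — the regions, Sid values, and the global services exempted via NotAction are illustrative choices, not a recommendation (JSON does not allow comments, so all caveats live here):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideApprovedRegions",
      "Effect": "Deny",
      "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": { "aws:RequestedRegion": ["ap-southeast-2", "us-east-1"] }
      }
    },
    {
      "Sid": "ProtectCloudTrail",
      "Effect": "Deny",
      "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"],
      "Resource": "*"
    }
  ]
}
```

Attached to an OU, this denies any API call outside the listed regions (except the global services in NotAction) and prevents anyone — including account administrators — from stopping or deleting CloudTrail.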

AWS Organizations — real-world OU layout with SCP inheritance
The Management Account (no workloads; billing and CloudTrail only) sits atop the organization Root, which carries baseline SCPs such as DenyRoot and EnforceRegion. Top-level OUs: Infrastructure, Workloads, and Security. The Workloads OU carries SCPs such as DenyDeleteTrail and RequireTags and contains Production, Non-Prod, and Sandbox child OUs; Sandbox adds SCPs like NoProduction and CostLimit. Leaf accounts include AppA-Prod, AppB-Prod, AppA-Dev, and AppA-Stg. SCPs are deny ceilings that inherit down; OUs are grouping only and hold no resources. The Infrastructure and Security OUs contain shared-services accounts (networking hub, log archive, security tooling), not application workloads.

AWS Control Tower

For enterprises starting fresh, AWS Control Tower is the recommended way to set up this structure. It provisions a Landing Zone — a pre-built OU structure, a set of baseline SCPs called guardrails, and mandatory accounts for logging and auditing — in a few hours rather than weeks of manual configuration.

If you already have an existing Organisation with many accounts, Control Tower can be enrolled incrementally. It does not require a greenfield start.

04

Azure: Management Groups and Policy

Azure's equivalent of AWS Organizations is the Management Group hierarchy, anchored to an Entra ID Tenant (formerly Azure Active Directory). The tenant is a container for all the identities — every human and workload identity in your enterprise lives here.

The structure

The Root Management Group is created automatically when you first use Management Groups in a tenant. It is the parent of everything. Below it, you create Management Groups to mirror your organisation — by business unit, geography, environment, or application portfolio, depending on your needs.

The billing boundary in Azure is the Subscription. A subscription belongs to exactly one Management Group. Resources live inside subscriptions, further grouped into Resource Groups — which are logical containers within a subscription, not separate billing entities. Resource Groups are organisational only; costs roll up to the subscription.

Azure Policy

Where AWS uses SCPs as a deny ceiling, Azure uses Azure Policy. Policies can audit, deny, or auto-remediate resources that do not comply with organisational rules. They can be attached at any level of the hierarchy and inherit downward.

Key difference from SCPs: Azure Policy can do more than deny. It can auto-remediate — for example, automatically tagging untagged resources, or deploying a diagnostic settings extension to every new VM. SCPs in AWS can only allow or deny API calls. They cannot fix what already exists.

Initiatives (also called Policy Sets) are bundles of related policies deployed together — equivalent to attaching multiple SCPs in a package. Azure ships built-in initiatives for common frameworks: CIS Benchmarks, NIST SP 800-53, PCI-DSS, and ISO 27001.
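As an illustration, the rule portion of a policy definition that denies resources outside approved regions could look like the sketch below. The regions are illustrative, and a real definition also carries metadata and (typically) a parameter for the allowed-locations list rather than hardcoded values:

```json
{
  "policyRule": {
    "if": {
      "field": "location",
      "notIn": ["australiaeast", "australiasoutheast"]
    },
    "then": {
      "effect": "deny"
    }
  }
}
```

Assigned at a Management Group scope, this rule is evaluated for every resource deployment in every subscription below it.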

Azure Management Group hierarchy with Policy inheritance
The Entra ID Tenant (identity root, e.g. contoso.onmicrosoft.com) anchors the Root Management Group, which carries tenant-wide policies such as Allowed Locations and MFA. Below it sit Management Groups for Corp, Online, and Platform, with initiatives such as PCI-DSS and RequireTags, and Production / Non-Production child groups. Subscriptions (AppA-Prod, AppB-Prod, AppA-Dev, AppA-Staging) are the billing boundaries; inside each, Resource Groups (e.g. frontend, data) are logical groupings only. Policies and initiatives inherit downward.

RBAC in Azure

Azure Role-Based Access Control (RBAC) assignments can be made at any scope: Management Group, Subscription, Resource Group, or individual resource. Assignments inherit downward. A role assigned at the Management Group level applies to every subscription and resource within it. This is powerful — but it means your RBAC design must be deliberate. Over-privileged assignments at a high scope propagate widely, and a single careless assignment can expose far more of your environment than intended.

05

GCP: Organization, Folders, and Projects

Google Cloud's hierarchy starts with a requirement that does not exist on the other clouds: you must have a Google Workspace or Cloud Identity domain before you can create an Organisation node. This is the identity anchor — GCP does not let you stand up an enterprise hierarchy without a managed identity store underneath it.

The structure

The Organisation is the root. It is automatically created when your Workspace or Cloud Identity domain is linked. Below the Organisation sit Folders, which are the GCP equivalent of OUs and Management Groups. Folders can be nested — a Folder can contain other Folders, up to ten levels deep.

The leaf node — the Project — is where all GCP resources live. A project has its own billing account link, its own API enablement, its own IAM namespace, and its own quota limits. There is no resource without a project. Every Cloud Storage bucket, every Compute Engine VM, every Cloud Run service belongs to a project.

Organisation Policies and IAM inheritance

GCP's governance layer is Organisation Policy — a set of constraints that can be attached at the Organisation, Folder, or Project level. Constraints are boolean (allow/deny) or list-based (allowed values). They cascade downward unless explicitly overridden at a lower level.

GCP-specific insight: IAM bindings in GCP are also inherited from parent to child — binding a role to a principal at the Folder level grants that role in all projects below that folder. But unlike AWS SCPs, there is no "ceiling" mechanism that blocks a child project from granting more permissive access than the parent. The parent can restrict via Org Policy constraints, but IAM inheritance is additive, not restrictive.
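As an illustration, a list constraint limiting where resources can be created can be set from a YAML payload via gcloud org-policies set-policy. The organisation ID and the allowed-value group below are illustrative:

```yaml
# Sketch of an Org Policy payload (org ID and values are illustrative):
#   gcloud org-policies set-policy policy.yaml
name: organizations/123456789012/policies/gcp.resourceLocations
spec:
  rules:
    - values:
        allowedValues:
          - in:us-locations
```

Set at the Organisation or a Folder level, this constraint cascades to every project below it unless a lower level explicitly overrides it.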
GCP Organisation, Folder, and Project hierarchy
Google Workspace / Cloud Identity (the required identity root) anchors the Organisation node acme.com, which carries Org Policies such as AllowedPolicyMember. Folders: shared-infra, business-units, and sandbox (sandbox adds the DisableServiceAccountKey constraint). Under business-units sit team-payments (IAM binding: payments-devs@acme.com at folder level) and team-platform. Leaf projects: pay-prod and pay-dev (each linked to a billing account, with project IDs like acme-pay-prod-3f8a), plus shared-vpc as the Shared VPC host project with cross-project links.
06

Side-by-side comparison

The conceptual model is parallel enough that you can map the layers directly. The differences are in the details — how policy works, where identity lives, and what the leaf node gives you.

| Concept | AWS | Azure | GCP |
| --- | --- | --- | --- |
| Identity root | IAM (per account); SSO via IAM Identity Center | Entra ID Tenant (mandatory) | Google Workspace / Cloud Identity (mandatory) |
| Hierarchy root | Organization (administrative root) | Root Management Group | Organisation node |
| Grouping container | Organizational Unit (OU), max 5 levels | Management Group, max 6 levels | Folder, max 10 levels |
| Billing boundary (leaf) | AWS Account | Subscription | Project |
| Sub-billing grouping | None (tags only) | Resource Group (logical) | Labels on resources |
| Policy enforcement | SCP (deny ceiling, no auto-remediation) | Azure Policy (deny + audit + remediate) | Org Policy constraints (deny, no remediation) |
| IAM inheritance | Not inherited; explicit per account | Inherited MG → Subscription → RG | Inherited Org → Folder → Project |
| Consolidated billing | AWS Organizations (payer account) | Enterprise Agreement / MCA | Cloud Billing Account |
| Automation / guardrails | AWS Control Tower | Azure Landing Zones (ALZ) | GCP Landing Zone (Terraform) |
The critical IAM difference: in AWS, IAM does not cross account boundaries without explicit role assumption. In Azure and GCP, IAM assignments inherit downward through the hierarchy: a role assigned at the management group (Azure) or folder (GCP) level applies to every subscription or project beneath it.
This makes AWS more isolated by default but more verbose to manage. Azure and GCP are easier to administer centrally but demand more discipline to prevent over-broad assignments at parent scopes.
07

Cross-enterprise access: federated identity vs native IAM

No enterprise runs a single application. People need access across accounts, subscriptions, and projects. Applications need to call APIs in other environments. And external partners need scoped, auditable entry points. Each cloud solves this differently — but every approach falls into one of two patterns: native role assumption or federated identity from an external IdP.

Pattern 1 — Native role assumption (cloud-native IAM trust)

In this pattern, identities inside one cloud boundary assume roles in another. No external identity provider is involved; trust is established directly between cloud principals. This pattern fits workload-to-workload access, where a resource in one account, subscription, or project needs to reach resources in another.

Native cross-boundary role assumption — all three clouds
AWS, cross-account: the target account creates an IAM Role whose trust policy allows the source account ID; the source workload (EC2, Lambda) calls sts:AssumeRole and receives temporary credentials. For human SSO across all accounts in the Organization, IAM Identity Center assigns permission sets to users and groups.
Azure, cross-subscription: the app gets a Managed Identity (system- or user-assigned); an Azure RBAC role is assigned to that identity on the target resource; the identity obtains tokens from Entra ID. No secrets, auto-rotated tokens; works across subscriptions in the same tenant.
GCP, cross-project: create a Service Account in the source project, grant it a role in the target project's IAM, and let it fetch tokens from the metadata server. With Workload Identity and service-account impersonation, no key files are ever downloaded.
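The trust relationship in the AWS flow is just a policy document attached to the target-account role. A minimal sketch — the source account ID is illustrative, and a production trust policy would usually also pin an ExternalId or a specific source role rather than the whole account:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

With this in place, any principal in account 111122223333 that is itself allowed to call sts:AssumeRole can obtain temporary credentials for the target role.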

Pattern 2 — Federated identity from an enterprise IdP

Most enterprises already have an identity provider — Microsoft Entra ID, Okta, Ping Identity, or an on-premises Active Directory. The preferred approach is to use that existing IdP as the source of truth and federate it into all three clouds, rather than maintaining separate identities in each cloud's native IAM. This pattern covers human access to the cloud.

Federated identity — enterprise IdP as the single source of truth
The enterprise IdP (Entra ID, Okta, Ping, or on-prem AD) holds the groups: Developers, Admins, ReadOnly. A user logs in once and receives a SAML 2.0 / OIDC token.
AWS: IAM Identity Center maps IdP groups to Permission Sets, and Permission Sets to account-and-role combinations; temporary credentials come from STS, with no IAM users.
Azure: Entra ID natively (or an external IdP federated into it) maps groups to RBAC roles at Management Group or Subscription scope; resource access uses OAuth 2.0 / OIDC tokens.
GCP: Workforce Identity Federation maps external IdP attributes to IAM bindings and conditions; access uses short-lived tokens with no service account keys.
SAML 2.0 and OIDC are the federation protocols on all three clouds: the IdP issues the assertion, and the cloud validates it.

AWS: IAM Identity Center

IAM Identity Center (formerly AWS SSO) is the recommended way to manage human access across all accounts in an AWS Organisation. You connect your external IdP, sync groups, define Permission Sets (which are IAM role templates), and then assign those sets to accounts. Users get a single portal URL, log in once with their corporate credentials, and see exactly the accounts and roles they have access to. No IAM users. No permanent credentials.

Azure: Entra ID and RBAC

Azure has a significant structural advantage here: if your enterprise is already using Microsoft 365, your Entra ID tenant is also your Azure identity root. Groups created in Entra ID — the same groups used for Teams, SharePoint, and Exchange — can be directly assigned Azure RBAC roles. There is no separate sync step. If your enterprise uses a non-Microsoft IdP (Okta, Ping), you configure external federation into Entra ID, which then acts as the broker for all Azure resource access.

GCP: Workforce Identity Federation

Workforce Identity Federation is GCP's mechanism for connecting external IdPs to human access. Rather than downloading service account keys or creating Cloud Identity accounts for every user, Workforce Identity lets your users authenticate with your corporate IdP and exchange that assertion for short-lived GCP access tokens. Attribute mapping lets you control which IdP groups translate to which GCP IAM bindings — keeping your source of truth in the IdP, not inside GCP's IAM.

Cross-cloud recommendation: Regardless of which cloud you are on, the pattern is the same. Your corporate IdP (Entra ID, Okta, or equivalent) should be the single source of truth for human identities. All three clouds support federation from that IdP. Never create cloud-native user accounts for humans. Use groups, not individuals, when assigning roles — groups give you audit trails and make joiners-movers-leavers processes manageable.
08

Billing boundaries and consolidated cost management

The hierarchy is not just about access control — it is the foundation for how you see and manage your cloud spend. All three clouds consolidate billing at the root, but the mechanics differ.

🟠
AWS — Consolidated Billing

The Management Account is the payer. All member accounts' charges flow up to it. AWS Cost Explorer can break down costs by account, OU, tag, or service. Savings Plans and Reserved Instances can be shared across accounts in the organisation (with sharing enabled). Each account still gets its own cost view.

🔵
Azure — Enterprise Agreement / MCA

Azure billing is tied to an Enrollment (EA) or Microsoft Customer Agreement (MCA). Departments and Accounts within an enrollment map loosely to Management Groups and Subscriptions. Cost Management + Billing gives cross-subscription views. Subscriptions can be moved between Management Groups without changing billing — the billing structure is somewhat independent of the governance hierarchy.

🟣
GCP — Billing Account linked to Org

A Billing Account is linked to the Organisation and one or more Projects. Projects are assigned to a billing account — a project with no billing account cannot use paid services. You can have multiple billing accounts under one Organisation (e.g. one per business unit). BigQuery exports give you fine-grained spend analysis at the project, service, label, and SKU level.

Tags are not optional. On all three clouds, the billing boundary (account / subscription / project) gives you top-line cost isolation. But within a single account, you need tags (AWS, Azure) or labels (GCP) to break costs down further by application, team, environment, or cost centre. Enforce tagging at deployment time — not retrospectively. SCPs, Azure Policy, and Org Policy constraints can all reject untagged resources at the API level.
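For example, on AWS an SCP can refuse ec2:RunInstances calls that arrive without a CostCenter tag. The tag key is illustrative, and the same Null-condition pattern works for other taggable services:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUntaggedEC2Instances",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "Null": { "aws:RequestTag/CostCenter": "true" }
      }
    }
  ]
}
```

The Null condition evaluates to true when the request carries no CostCenter tag at all, so the launch is denied before the instance ever exists — enforcement at the API, not cleanup afterwards.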
09

Account boundary recommendation: one account per app per env

This is the question most enterprises get wrong in year one and spend years unwinding: how many accounts (or subscriptions, or projects) should we have?

The answer that AWS, Microsoft, and Google all converge on in their enterprise reference architectures is: one billing boundary per application per environment. Separate production from staging from development — and separate each application from every other application — at the account level, not just the VPC or the tag level.

Why not a shared account?

The temptation is to create one account per environment — a single production account for everything, a single dev account for everything. It feels easier to manage. In practice, it creates three problems that compound over time.

Recommended: one account / subscription / project per application per environment
Each application gets one account per environment, placed in the matching OU:

App A (Payments): acme-payments-prod (OU: Workloads/Prod), acme-payments-stg and acme-payments-dev (OU: Workloads/NonProd), acme-payments-sbx (OU: Sandbox).
App B (Orders): acme-orders-prod (OU: Workloads/Prod), acme-orders-stg and acme-orders-dev (OU: Workloads/NonProd), acme-orders-sbx (OU: Sandbox).
Shared services (acme-networking-hub, acme-security, acme-log-archive) hold the Transit Gateway / Shared VPC / VNet peering hub, GuardDuty / Defender, and centralised CloudTrail / Azure Monitor / Cloud Logging. These accounts exist independently, managed by the platform team, and are not tied to any single application.

Key principles of this model:
1. Blast radius is contained: a misconfiguration in acme-payments-prod cannot affect acme-orders-prod.
2. Cost is unambiguous: every account maps to exactly one application and one environment, so no tagging is needed for top-level attribution.
3. IAM is clean: each account starts with a fresh IAM namespace, with no risk of policy entanglement between teams.
4. SCPs / Policy / Org Policy apply uniformly: Prod accounts automatically get stricter guardrails than Sandbox accounts via OU placement.
5. Account vending is automated: use Control Tower (AWS), the ALZ Accelerator (Azure), or a Terraform Landing Zone (GCP) to provision accounts.

How many accounts is too many?

A common objection is: "We have fifty applications. That means 200 accounts. That is unmanageable." This objection reflects the old mental model — where each account required manual setup and manual IAM configuration. With Landing Zone automation, an account is a unit of infrastructure provisioned from a template in minutes. The overhead per account in a well-automated Landing Zone is near zero.

AWS's own enterprise customers routinely operate organisations with 1,000 or more accounts. GCP enterprises run thousands of projects. Azure enterprises run hundreds of subscriptions. The hierarchy and automation layer are designed to handle this scale.

Naming convention

Adopt a consistent naming convention before your first account is created. A pattern like {org}-{app}-{env} — for example acme-payments-prod — makes accounts searchable, auditable, and self-documenting. The convention needs to be enforced at provisioning time, not retroactively applied.
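A provisioning pipeline can enforce the convention before any account is created. Here is a minimal sketch in Python; the helper names and the allowed environment list are my own assumptions, not a standard:

```python
import re

# Illustrative: allowed environment suffixes for the {org}-{app}-{env} pattern.
VALID_ENVS = {"prod", "stg", "dev", "sbx"}

# Exactly three lowercase alphanumeric segments joined by hyphens.
NAME_RE = re.compile(r"^(?P<org>[a-z][a-z0-9]+)-(?P<app>[a-z][a-z0-9]+)-(?P<env>[a-z]+)$")

def is_valid_account_name(name: str) -> bool:
    """True if the name matches {org}-{app}-{env} with an approved env."""
    m = NAME_RE.match(name)
    return bool(m) and m.group("env") in VALID_ENVS

def account_name(org: str, app: str, env: str) -> str:
    """Build an account name from its parts, failing fast on bad input."""
    name = f"{org}-{app}-{env}".lower()
    if not is_valid_account_name(name):
        raise ValueError(f"invalid account name: {name!r}")
    return name

print(account_name("acme", "payments", "prod"))   # acme-payments-prod
print(is_valid_account_name("acme-payments-qa"))  # False ("qa" not an approved env)
```

Wire a check like this into your account-vending pipeline (Account Factory, ALZ, or Project Factory) so a non-conforming name is rejected before anything is provisioned.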

The latest guidance from all three clouds (2025–2026): One billing boundary per application per environment. Automate provisioning with a Landing Zone. Use your corporate IdP for human access. Never create long-lived human credentials (IAM users, access keys). Enforce tags or labels at the API level. Design your OU / Management Group / Folder structure around policy needs, not just organisational structure — because SCPs and policies attach to the hierarchy, and the hierarchy determines what your guardrails actually do.

Where to start

🟠
AWS — start here

Enable AWS Control Tower. Let it create the Landing Zone with root, Sandbox, and Workloads OUs. Enable IAM Identity Center. Connect your IdP. Then use Account Factory to vend new accounts per application per environment.

🔵
Azure — start here

Deploy the Azure Landing Zone Accelerator via the Azure Portal or Terraform. It creates the Management Group hierarchy, assigns built-in policies, and provisions platform subscriptions. Connect your existing Entra ID groups to RBAC roles at the Management Group scope.

🟣
GCP — start here

Use the GCP Landing Zone Terraform modules from Google Cloud's GitHub. They provision the Organisation structure, Shared VPC, log sinks, and base Org Policies. Enable Workforce Identity Federation for your IdP. Use Project Factory to create new projects per application per environment.


About the Author

Mayank Pandey

AWS Community Hero and Cloud Architect with 15+ years of experience. AWS Solutions Architect Professional, FinOps Practitioner, and AWS Authorized Instructor. Creator of the KnowledgeIndia YouTube channel (80,000+ subscribers). Based in Melbourne, Australia.