How ARO Networking Works

Azure Red Hat OpenShift (ARO) combines OpenShift overlay networking with Azure VNet subnet requirements.

That combination is where most planning mistakes happen.

Many production issues come from:

  • Undersized worker subnets
  • Confusing Azure subnets with Pod/Service CIDRs
  • Forgetting Azure reserves 5 IPs per subnet
  • Not planning for node scaling and upgrade surge capacity

This guide explains how ARO networking works and how to size it correctly.


The Big Picture: Azure Subnets + OpenShift Overlay Networks

An ARO cluster uses:

Azure VNet subnets

  • Control plane (master) subnet
  • Worker subnet

OpenShift cluster networks (overlay)

  • Cluster network (Pods)
  • Service network (ClusterIP Services)
  • Machine network (Node IP range — typically the Azure subnet range)

Critical distinction:

Pods and Services do NOT consume Azure subnet IPs.
Azure subnets are primarily for node infrastructure.


Master and Worker Subnets in ARO

ARO requires two Azure subnets:

Control Plane (Master) Subnet

  • Hosts the OpenShift control plane nodes
  • Typically fixed at 3 nodes
  • Recommended to be /27 or larger

Worker Subnet

  • Hosts worker nodes
  • Scales with node pools and autoscaling
  • Recommended to be /27 or larger
  • This is the subnet that usually runs out first

Azure reserves 5 IP addresses in every subnet: the network address, the default gateway, two addresses for Azure DNS, and the broadcast address.

Usable IPs = total - 5

Always include this reduction in your planning.
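The reserved-IP adjustment can be sketched with Python's standard ipaddress module (the helper name is my own, not part of any Azure SDK):

```python
import ipaddress

AZURE_RESERVED_IPS = 5  # network address, gateway, 2 Azure DNS, broadcast


def usable_ips(cidr: str) -> int:
    """Node-assignable addresses left in an Azure subnet after reservations."""
    return ipaddress.ip_network(cidr).num_addresses - AZURE_RESERVED_IPS


print(usable_ips("10.0.0.0/27"))  # 32 - 5 = 27
print(usable_ips("10.0.4.0/22"))  # 1024 - 5 = 1019
```

This is why a "/27 minimum" really means 27 node slots, not 32.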


OpenShift Networking (Overlay)

ARO uses OVN-Kubernetes as its networking plugin.

This creates:

Cluster Network (Pod CIDR)

  • Default: 10.128.0.0/14
  • Overlay-based
  • Does not consume Azure subnet IPs

Service Network (Service CIDR)

  • Default: 172.30.0.0/16
  • Used for ClusterIP services
  • Also overlay-based

Important rule:

Pod and Service CIDRs must NOT overlap:

  • With each other
  • With the Azure VNet address space
  • With any peered or on-prem networks

Overlapping ranges cause routing conflicts and break cluster connectivity.
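A quick pre-flight check for this rule, again using the stdlib ipaddress module; the ranges below are the ARO defaults plus an assumed 10.0.0.0/16 VNet:

```python
import ipaddress
from itertools import combinations

ranges = {
    "vnet":     ipaddress.ip_network("10.0.0.0/16"),    # assumed VNet space
    "pods":     ipaddress.ip_network("10.128.0.0/14"),  # default cluster network
    "services": ipaddress.ip_network("172.30.0.0/16"),  # default service network
}

# Compare every pair of ranges and collect any that overlap.
conflicts = [
    (name_a, name_b)
    for (name_a, net_a), (name_b, net_b) in combinations(ranges.items(), 2)
    if net_a.overlaps(net_b)
]

print(conflicts)  # [] -- the defaults are safe alongside a 10.0.0.0/16 VNet
```

Run the same check against every peered VNet and on-prem range before cluster creation.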


Per-Node Pod Allocation (/23 hostPrefix)

In ARO:

  • Each worker node is allocated a fixed /23 prefix
  • A /23 provides 512 IP addresses per node

This is defined at cluster creation and cannot be changed later.

Implication:

The cluster network must be large enough to hand out one /23 block to every worker node:

(max worker nodes) × (512 IPs per /23 block)

Example:

If cluster network = /14 (262,144 IPs)

Each node consumes /23 (512 IPs)

Max theoretical worker nodes:

262,144 / 512 = 512 nodes

(Practical scaling limits are lower, but this illustrates the CIDR math.)
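The same math as a tiny helper (a /23 hostPrefix is the ARO default; the function name is illustrative):

```python
import ipaddress


def max_pod_blocks(cluster_network: str, host_prefix: int = 23) -> int:
    """How many per-node /host_prefix blocks the cluster network contains."""
    net = ipaddress.ip_network(cluster_network)
    return 2 ** (host_prefix - net.prefixlen)


print(max_pod_blocks("10.128.0.0/14"))  # 262,144 / 512 = 512 node blocks
print(max_pod_blocks("10.128.0.0/16"))  # a /16 cluster network caps at 128 nodes
```

The second call shows why shrinking the cluster network quietly lowers your node ceiling.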


Why ARO Subnets Run Out of IPs

1️⃣ Worker subnet too small

If worker subnet cannot allocate new node IPs:

  • Autoscaler fails
  • Node pools cannot expand
  • Upgrades may stall

This is the most common ARO subnet failure.


2️⃣ Not planning for scale limits

Modern ARO clusters can scale to ~250 worker nodes.

Even if you start with 10 nodes:

  • Future growth
  • Additional node pools
  • Upgrade surge capacity

…all require extra IP headroom.


3️⃣ Upgrade surge capacity

During cluster upgrades:

  • Additional nodes may be temporarily provisioned
  • Rolling updates require spare capacity

Tightly sized subnets create upgrade risk.
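Putting the three failure modes together, here is a rough worker-subnet sizing sketch of my own: it folds in the 5 Azure-reserved IPs, one surge node, and a growth buffer, and never returns a subnet smaller than the /27 ARO minimum:

```python
import math

AZURE_RESERVED_IPS = 5
ARO_MIN_PREFIX = 27  # ARO subnets must be /27 or larger


def worker_subnet_prefix(max_nodes: int, surge_nodes: int = 1,
                         growth_buffer: float = 0.30) -> int:
    """Smallest prefix length whose subnet fits nodes + surge + buffer."""
    needed = math.ceil(max_nodes * (1 + growth_buffer)) + surge_nodes
    needed += AZURE_RESERVED_IPS
    prefix = 32 - math.ceil(math.log2(needed))
    # A numerically smaller prefix means a bigger subnet, so cap at /27.
    return min(prefix, ARO_MIN_PREFIX)


print(worker_subnet_prefix(10))   # a small cluster still needs a /27
print(worker_subnet_prefix(100))  # 136 addresses needed -> /24
print(worker_subnet_prefix(250))  # ARO's max node count -> /23
```

Treat the output as a floor, not a target; subnets cannot be resized after cluster creation, so rounding up one prefix size is cheap insurance.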


ARO vs AKS Networking

Feature                       | ARO           | AKS (Flat Mode)
------------------------------+---------------+----------------
Pods use Azure subnet IPs     | No            | Yes
Overlay networking            | Yes           | No
Subnet pressure mainly from   | Nodes         | Nodes + Pods
Pod IP routable in VNet       | No (overlay)  | Yes

ARO behaves closer to AKS Overlay or GKE in terms of IP separation.

But Azure subnet sizing is still critical for node growth.


Best Practices for Production ARO (2026)

  1. Use at least /27 subnets for control plane and workers.
  2. Size worker subnet for long-term node scaling (not day-one).
  3. Add 25–30% growth buffer.
  4. Ensure Pod and Service CIDRs do not overlap any network ranges.
  5. Plan cluster network size based on /23 per node allocation.
  6. Account for upgrade surge nodes.
  7. Document network design before cluster creation (CIDRs cannot easily be changed later).

Example Production Layout

Azure VNet

  • Master subnet: 10.0.0.0/27
  • Worker subnet: 10.0.4.0/22

OpenShift overlay networks

  • Cluster network: 10.128.0.0/14
  • Service network: 172.30.0.0/16

This supports:

  • Stable control plane
  • Large worker node scaling
  • Thousands of pods
  • Long-term operational headroom
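Verifying capacity for a /22 worker subnet and a /14 cluster network like the layout above (the base addresses are illustrative):

```python
import ipaddress

worker_subnet = ipaddress.ip_network("10.0.4.0/22")   # example /22 worker subnet
cluster_net = ipaddress.ip_network("10.128.0.0/14")   # default cluster network

node_ip_capacity = worker_subnet.num_addresses - 5      # Azure reserves 5 IPs
pod_block_capacity = 2 ** (23 - cluster_net.prefixlen)  # /23 blocks available

# The worker subnet can address 1,019 nodes and the cluster network can hand
# pod blocks to 512, so neither limit binds before ARO's ~250-node ceiling.
print(node_ip_capacity, pod_block_capacity)  # 1019 512
```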

Use the ARO Subnet Calculator

To plan safely, run your numbers through the ARO Subnet Calculator.

It helps estimate:

  • Master subnet size
  • Worker subnet size
  • Growth buffer impact
  • Azure reserved IP adjustments

Final Thoughts

ARO networking is simpler than flat Azure CNI in one key way:

Pods do not consume Azure subnet IPs.

But subnet planning still matters because:

  • Nodes scale
  • Node pools expand
  • Upgrades require surge capacity
  • Azure reserves 5 IPs per subnet
  • Pod CIDRs must not overlap any external networks

If you understand:

  • Master vs worker subnets
  • /23 per-node allocation
  • Azure reserved IP math
  • Non-overlapping CIDR design

…you can design ARO networks that scale cleanly for years.

Want to understand how this compares to other Kubernetes providers?

Kubernetes Networking Comparison Guide