Kubernetes Subnet Size Calculator (AKS, ARO, EKS & GKE)
Use this calculator to estimate node subnet and pod CIDR sizes before deploying Kubernetes clusters in Azure, AWS, or Google Cloud.
It supports:
- AKS (Azure CNI, Azure CNI Overlay, Kubenet)
- ARO (Azure Red Hat OpenShift)
- EKS (typical AWS VPC CNI model)
- GKE (VPC-native clusters with secondary ranges)
The planner models:
- Cloud provider reserved IP behavior
- Shared-subnet vs split-range networking
- Growth headroom
- Validation of existing CIDR prefixes
Not sure which provider model fits your architecture?
→ Compare Kubernetes Networking Models
What This Planner Calculates
The planner estimates required IP capacity using:
- Total planned nodes = current nodes + future growth
- Node IP requirement = planned nodes × growth buffer
- Pod IP requirement = planned nodes × max pods per node × growth buffer
Then, based on provider networking behavior, it selects the smallest CIDR prefix that satisfies: total subnet IPs ≥ required IPs + cloud-reserved IPs
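The selection rule above can be sketched in a few lines of Python. The function name and the /29 starting point are illustrative assumptions, not the planner's actual API:

```python
# Sketch of the prefix-selection rule: pick the smallest subnet whose
# total address count covers required IPs plus cloud-reserved IPs.
def smallest_prefix(required_ips: int, reserved_ips: int) -> int:
    """Smallest subnet (highest prefix number) that fits the demand."""
    for prefix in range(29, 7, -1):  # try small subnets first
        if 2 ** (32 - prefix) >= required_ips + reserved_ips:
            return prefix
    raise ValueError("demand exceeds a /8")
```

For example, 1,163 required IPs plus Azure's 5 reserved IPs needs 1,168 addresses, which a /21 (2,048 addresses) covers but a /22 (1,024) does not.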
Inputs
- Cloud provider (AKS / ARO / EKS / GKE)
- AKS network mode (Azure CNI / Overlay / Kubenet)
- ARO control plane node count
- Number of worker nodes
- Max pods per node
- Growth buffer (%)
- Optional future node growth
- Optional current CIDR prefixes (to detect undersized ranges)
Outputs
Depending on provider, the planner returns:
- Required node subnet size
- Required pod CIDR size
- Reserved IP impact per subnet/range
- Suggested CIDR prefix recommendations
- Validation warnings if an existing range is too small
Provider Networking Models Explained
Understanding how each platform allocates IPs is critical.
AKS Subnet Sizing
Azure CNI (Flat / Shared Subnet)
- Pods and nodes consume IPs from the same Azure subnet
- Azure reserves 5 IPs per subnet
Planner calculates:
- One subnet large enough for nodes + pods combined
This is the most common source of unexpected subnet exhaustion.
Azure CNI Overlay
- Nodes consume Azure subnet IPs
- Pods use an overlay CIDR
- Azure reserves 5 IPs for the node subnet
- Pod CIDR does not consume Azure subnet IPs
Planner calculates:
- Node subnet size
- Separate pod CIDR size
Kubenet
- Nodes consume Azure subnet IPs
- Pods use a separate pod CIDR
- Azure reserves 5 IPs per subnet
Planning logic matches overlay mode from a subnet capacity perspective.
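From a capacity standpoint, Overlay and Kubenet both size the two ranges independently. A minimal sketch of that split model, assuming the helper name and rounding approach (not the planner's real implementation):

```python
import math

AZURE_RESERVED = 5  # per-subnet reservation described above

def split_model_prefixes(nodes: int, max_pods: int, buffer: float):
    """Return (node_subnet_prefix, pod_cidr_prefix) for the split model:
    only the node subnet carries Azure's 5-IP reservation; the pod CIDR
    does not consume Azure subnet IPs."""
    node_ips = math.ceil(nodes * (1 + buffer)) + AZURE_RESERVED
    pod_ips = math.ceil(nodes * max_pods * (1 + buffer))
    node_prefix = 32 - math.ceil(math.log2(node_ips))
    pod_prefix = 32 - math.ceil(math.log2(pod_ips))
    return node_prefix, pod_prefix
```

With 30 nodes, 50 max pods per node, and a 20% buffer, this yields a /26 node subnet and a /21 pod CIDR.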
ARO (Azure Red Hat OpenShift)
ARO typically uses:
- A master subnet (control plane)
- A worker subnet
- A separate OpenShift cluster network CIDR for pods
Key behaviors:
- Azure reserves 5 IPs per subnet
- Pods use overlay cluster network CIDR (do not consume subnet IPs)
- Master and worker subnets must be sized independently
Planner calculates:
- Master subnet size
- Worker subnet size
- Pod cluster network CIDR
This reflects real-world ARO deployments more accurately than treating it like AKS.
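The three independent ARO sizes can be sketched as follows. This shows only the arithmetic; real ARO deployments also impose platform minimum subnet sizes that may raise these results, and the helper name is an assumption:

```python
import math

AZURE_RESERVED = 5

def aro_plan(masters: int, workers: int, max_pods: int, buffer: float):
    """Size ARO's master subnet, worker subnet, and pod cluster network
    independently. The control plane count is fixed, so no growth
    buffer is applied to the master subnet."""
    def prefix(ips: int) -> int:
        return 32 - math.ceil(math.log2(ips))
    return {
        "master_subnet": prefix(masters + AZURE_RESERVED),
        "worker_subnet": prefix(math.ceil(workers * (1 + buffer)) + AZURE_RESERVED),
        "pod_cidr": prefix(math.ceil(workers * max_pods * (1 + buffer))),
    }
```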
EKS (Amazon EKS)
- Pods and nodes typically consume IPs from the same VPC subnet(s)
- AWS reserves 5 IPs per subnet
- Subnet must accommodate total (nodes + pods)
⚠ Important:
Even if CIDR math is sufficient, ENI/IP-per-instance limits may cap effective pod density depending on EC2 instance type.
The planner surfaces this limitation as guidance.
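The ENI constraint follows the standard AWS VPC CNI density formula (without prefix delegation): max pods = ENIs × (IPv4 addresses per ENI − 1) + 2. The instance figures below are examples; check AWS documentation for your instance type:

```python
# Standard AWS VPC CNI pod-density formula (prefix delegation disabled).
def eks_max_pods(enis: int, ips_per_eni: int) -> int:
    return enis * (ips_per_eni - 1) + 2

# e.g. an m5.large supports 3 ENIs with 10 IPv4 addresses each,
# giving the familiar default of 29 max pods.
```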
GKE (VPC-Native)
- Nodes use the primary subnet
- Pods use a secondary IP range
- Google Cloud reserves 4 IPs per subnet/range (first 2 + last 2)
Planner calculates:
- Primary node subnet size
- Secondary pod range size
GKE allocates pod IP blocks per node from the secondary CIDR.
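Because GKE carves the secondary range into per-node blocks (at least 2 × max pods per node, rounded up to a power of two), the secondary range must hold one block per node. A hedged sketch, omitting the growth buffer for brevity:

```python
import math

def gke_pod_range_prefix(nodes: int, max_pods_per_node: int) -> int:
    """Prefix for a GKE secondary pod range: each node gets a block of
    at least 2 x max pods, rounded up to a power of two."""
    per_node_block = 2 ** math.ceil(math.log2(2 * max_pods_per_node))
    total = nodes * per_node_block
    return 32 - math.ceil(math.log2(total))
```

At the default of 110 max pods per node, the per-node block works out to 256 addresses, i.e. the /24 per node that GKE documents.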
Example Scenarios
Example 1: AKS Azure CNI (Flat)
Inputs:
- 20 nodes
- 30 max pods per node
- 25% buffer
- Future node growth: 10
Result:
- One subnet recommendation covering nodes + pods
- Azure's 5 reserved IPs per subnet included
- Validation warnings if current subnet too small
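Working Example 1 by hand under the formulas above (applying the buffer to both node and pod counts; the arithmetic is illustrative):

```python
import math

nodes = 20 + 10          # current nodes + future growth
buffer = 0.25
node_ips = math.ceil(nodes * (1 + buffer))       # 38
pod_ips = math.ceil(nodes * 30 * (1 + buffer))   # 30 max pods/node -> 1125
total = node_ips + pod_ips + 5                   # 1168, incl. Azure's 5 reserved
prefix = 32 - math.ceil(math.log2(total))        # /21 (2048 addresses)
```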
Example 2: AKS Azure CNI Overlay
Inputs:
- 30 nodes
- 50 max pods per node
- 20% buffer
Result:
- Node subnet recommendation (Azure reserved included)
- Separate pod overlay CIDR recommendation
Example 3: ARO Deployment
Inputs:
- 3 control plane nodes
- 25 worker nodes
- 40 max pods per node
- 25% buffer
Result:
- Master subnet recommendation
- Worker subnet recommendation
- Pod cluster network CIDR suggestion
Example 4: GKE VPC-Native
Inputs:
- 40 nodes
- 32 max pods per node
- 20% buffer
Result:
- Primary subnet suggestion
- Secondary pod range suggestion
- Google Cloud's 4 reserved IPs per range included
Example 5: EKS with Existing CIDR Validation
Inputs:
- 25 nodes
- 40 max pods per node
- 30% buffer
- Current subnet: /25
Result:
- Warning if subnet too small for total planned IPs
- Reminder that ENI limits may restrict effective pod scaling
Why This Planner Matters
Common real-world failures:
- AKS Azure CNI subnets running out of IPs unexpectedly
- EKS scaling blocked despite “enough CIDR”
- GKE secondary ranges sized too small
- ARO worker subnet exhaustion during scale events
Undersized subnets in Kubernetes environments are difficult and disruptive to fix after deployment.
This planner helps you size correctly before deployment.