How to Size Subnets for Kubernetes Clusters

Sizing subnets for Kubernetes is not the same as sizing subnets for virtual machines.

A Kubernetes cluster can consume IP addresses from:

  • Nodes
  • Pods
  • Services
  • Load balancers and supporting platform components

That means “How big should my subnet be?” is really a cluster networking question, not just a CIDR math question.

This guide explains how to size Kubernetes subnets safely, how pod IP allocation changes the answer, and how the major managed platforms differ in 2026.


The First Question: Where Do Pod IPs Come From?

The most important Kubernetes subnet sizing question is:

Do pods consume IP addresses from the same subnet as nodes?

If the answer is yes, subnet pressure rises quickly.

If the answer is no, subnet sizing becomes easier because node and pod growth are separated.

This is the core difference between:

  • Shared-subnet models
  • Overlay models
  • Secondary-range models

Examples:

Model              Node IP Source   Pod IP Source              Subnet Pressure
Shared subnet      Primary subnet   Same subnet as nodes       High
Overlay            Primary subnet   Separate pod CIDR          Lower
Secondary ranges   Primary subnet   Separate secondary range   Moderate / predictable

If you only remember one thing from this article, remember this:

Kubernetes subnet sizing starts with pod IP allocation, not subnet mask math.


The Four Things You Must Size

In most real Kubernetes environments, you need to think about four IP domains:

1. Node subnet

Every node needs an IP address.

That sounds simple, but you must still account for:

  • Current nodes
  • Future nodes
  • Autoscaler surge
  • Rolling upgrades
  • Cloud-reserved addresses

2. Pod CIDR or pod range

Pods may come from:

  • The same subnet as nodes
  • A dedicated overlay CIDR
  • A secondary subnet/range

This is usually the largest IP consumer in the cluster.

3. Service CIDR

Kubernetes Services also consume IP addresses.

Service ranges are often smaller than pod ranges, but they still matter. An undersized service CIDR can become a hidden scaling limit.

4. Platform overhead

Depending on the provider and design, you may also need headroom for:

  • Control plane integration
  • Load balancers
  • Private endpoints
  • Additional node pools
  • Blue/green migration capacity

Ignoring this is how “the math looked fine” turns into a failed production scale event.


The Basic Sizing Formula

At a high level, subnet planning usually starts with:

Node IP requirement

planned nodes + growth headroom

Pod IP requirement

planned nodes × max pods per node × growth factor

Service IP requirement

planned services + growth headroom

Then you apply:

  • Cloud reserved IP rules
  • Per-node pod CIDR slice behavior
  • Separate subnet or secondary range behavior

This is why Kubernetes subnet sizing is different from classic “how many hosts fit in a /24?” subnetting.
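
To make the arithmetic concrete, here is a minimal Python sketch. The cluster numbers, the 1.3 growth factor, and the 5 reserved addresses per subnet (an AWS/Azure-style default, applied uniformly as a conservative simplification) are all assumptions; substitute your own values.

    import math

    # Largest prefix (smallest subnet) whose capacity, minus
    # cloud-reserved addresses, still covers required_ips.
    def smallest_prefix(required_ips: int, reserved: int = 5) -> int:
        for prefix in range(30, 0, -1):
            if 2 ** (32 - prefix) - reserved >= required_ips:
                return prefix
        raise ValueError("requirement exceeds IPv4 space")

    nodes, max_pods, services, growth = 20, 50, 100, 1.3

    node_need = math.ceil(nodes * growth)             # 26 node IPs
    pod_need = math.ceil(nodes * max_pods * growth)   # 1,300 pod IPs
    svc_need = math.ceil(services * growth)           # 130 service IPs

    print(f"nodes:    /{smallest_prefix(node_need)}")   # /27
    print(f"pods:     /{smallest_prefix(pod_need)}")    # /21
    print(f"services: /{smallest_prefix(svc_need)}")    # /24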

If you want a quick lookup table for common masks and host counts while reading:

CIDR Cheat Sheet


Why maxPods Matters So Much

The most commonly underestimated input is:

Max pods per node

Kubernetes subnet sizing is usually driven more by potential pod density than by current usage.

For example:

  • 20 nodes
  • 50 max pods per node

This creates capacity for:

20 × 50 = 1,000 pod IPs

Even if you are only running 250 pods today, the network design must support the maximum scheduling envelope if you want scaling and upgrades to work safely.

That is why many clusters “run out of IPs unexpectedly” even though current workload counts look modest.
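
As a sanity check, here is a small sketch that turns the scheduling envelope from the example above into the smallest mask that fits, assuming an overlay-style pod range with no cloud-reserved addresses:

    import math

    # Smallest prefix that covers the full scheduling envelope.
    def envelope_prefix(nodes: int, max_pods: int) -> int:
        return 32 - math.ceil(math.log2(nodes * max_pods))

    print(envelope_prefix(20, 50))   # 1,000 pod IPs -> 22, i.e. a /22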


Why /24 Often Fails in Production

A /24 sounds large when you are thinking about virtual machines.

In Kubernetes, it is often not.

A /24 gives:

  • 256 total IPs
  • Fewer usable IPs after provider reservations

If pods and nodes share the same subnet, that can disappear very quickly.

Example:

  • 10 nodes
  • 30 max pods per node
  • Shared-subnet model

Required addresses:

  • 10 node IPs
  • 300 pod IPs
  • growth buffer

That already exceeds the practical capacity of a /24.
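
A quick way to verify this, sketched with Python's ipaddress module (the 5 reserved addresses follow the AWS/Azure pattern; the node and pod counts come from the example above):

    import ipaddress

    # Can a shared subnet hold every node and pod IP after cloud-reserved
    # addresses? (No growth buffer included.)
    def fits(cidr: str, nodes: int, max_pods: int, reserved: int = 5) -> bool:
        usable = ipaddress.ip_network(cidr).num_addresses - reserved
        return usable >= nodes + nodes * max_pods

    print(fits("10.0.0.0/24", 10, 30))   # False: 251 usable < 310 needed
    print(fits("10.0.0.0/22", 10, 30))   # True: 1,019 usable >= 310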

This is why many production Kubernetes designs start at /23, /22, /21, or larger depending on the networking model.


Shared Subnet vs Separate Pod CIDR

This is the practical subnet sizing split that matters most.

Shared subnet model

In this model:

  • Nodes use subnet IPs
  • Pods also use subnet IPs

This means the node subnet must absorb:

  • Node growth
  • Pod growth
  • Reserved cloud IPs

This is the most IP-intensive model.

Common examples:

  • AKS Azure CNI (flat)
  • EKS with AWS VPC CNI

Separate pod CIDR model

In this model:

  • Nodes use the node subnet
  • Pods use a separate overlay CIDR or secondary range

This makes subnet planning cleaner because node and pod scaling are separated.

Common examples:

  • AKS Overlay
  • Kubenet
  • ARO cluster network
  • GKE pod secondary range

This is usually safer when address space is tight.
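
The difference is easy to see in numbers. A sketch with hypothetical counts:

    nodes, max_pods = 30, 40

    # Shared subnet: one subnet must absorb nodes and pods together.
    shared_need = nodes + nodes * max_pods   # 1,230 IPs -> at least a /21

    # Separate pod CIDR: node subnet and pod range are sized independently.
    node_need = nodes                        # 30 IPs    -> a /26 is plenty
    pod_need = nodes * max_pods              # 1,200 IPs -> a /21 overlay CIDR

    print(shared_need, node_need, pod_need)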


Growth Buffer Is Not Optional

Subnet sizing that only covers day-one requirements is usually wrong.

A reasonable Kubernetes growth buffer often includes:

  • 20–30% extra headroom
  • Temporary upgrade capacity
  • Autoscaler bursts
  • Additional node pools later

If your design only fits the exact minimum, it is already too small.

This is especially true when:

  • multiple teams share the cluster
  • cluster autoscaling is enabled
  • blue/green rollout patterns are used
  • the organization is CIDR-constrained


Provider Differences That Change the Math

The sizing process is similar across platforms, but the implementation details are not.

AKS

AKS can behave in two very different ways:

  • Azure CNI flat: nodes and pods share the Azure subnet
  • Azure CNI Overlay / Kubenet: nodes use subnet IPs, pods use a separate CIDR

Azure also reserves 5 IP addresses per subnet.
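
For flat Azure CNI, a commonly cited estimate pre-allocates pod IPs per node and allows one extra node for upgrade surge. A sketch (the node and maxPods numbers are hypothetical):

    # Azure CNI (flat): each node pre-allocates maxPods pod IPs from the
    # subnet; the +1 node covers rolling-upgrade surge.
    nodes, max_pods = 50, 30
    required = (nodes + 1) * (max_pods + 1)
    print(required)   # 1,581 IPs -> at least a /21 (2,043 usable after reservations)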

How AKS Networking Works
AKS Subnet Calculator

EKS

In common EKS designs:

  • Nodes use VPC subnet IPs
  • Pods also use VPC subnet IPs

AWS reserves 5 IP addresses per subnet, and ENI/IP-per-instance limits can become a separate scaling ceiling.
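
The ENI ceiling is worth checking early. The standard estimate for the default VPC CNI, without prefix delegation, is ENIs × (IPs per ENI − 1) + 2:

    # Max pods per EC2 instance under the default AWS VPC CNI: each ENI
    # keeps its first IP for itself; +2 accounts for host-networking pods.
    def eks_max_pods(enis: int, ips_per_eni: int) -> int:
        return enis * (ips_per_eni - 1) + 2

    print(eks_max_pods(3, 10))   # m5.large -> 29 pods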

How EKS Networking Works
EKS Subnet Calculator

GKE

GKE VPC-native clusters typically separate:

  • Primary subnet for nodes
  • Secondary range for pods
  • Secondary range for services

Google Cloud reserves 4 IPs in each subnet's primary range, and the pod secondary range must be large enough to support per-node CIDR slicing.
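
Per-node slicing is what drives GKE pod range size: each node gets a fixed slice (/24 by default, which supports the default maxPods of 110), so the pod range caps your node count. A sketch:

    # Nodes a GKE pod secondary range can support, given fixed per-node
    # slices (/24 per node by default).
    def gke_max_nodes(pod_range_prefix: int, per_node_prefix: int = 24) -> int:
        return 2 ** (per_node_prefix - pod_range_prefix)

    print(gke_max_nodes(16))   # /16 pod range -> 256 nodes
    print(gke_max_nodes(20))   # /20 pod range -> 16 nodes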

How GKE Networking Works
GKE Subnet Calculator

ARO

ARO usually requires:

  • Master subnet
  • Worker subnet
  • Separate cluster network CIDR for pods

Worker subnets and pod CIDRs must be sized independently.
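
OpenShift's defaults make the pod math predictable: the cluster network (10.128.0.0/14 by default) is sliced per node by hostPrefix (/23 by default, so 512 pod addresses per node). A sketch of the resulting node ceiling:

    # Nodes an OpenShift/ARO cluster network can support on its defaults
    # (clusterNetwork 10.128.0.0/14, hostPrefix 23 -> /23 slice per node).
    def aro_max_nodes(cluster_prefix: int = 14, host_prefix: int = 23) -> int:
        return 2 ** (host_prefix - cluster_prefix)

    print(aro_max_nodes())   # 512 nodes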

How ARO Networking Works
ARO Subnet Calculator

If you want a provider-neutral starting point first:

Kubernetes Pod CIDR Calculator

If you want cloud-specific planning across platforms:

Cloud Kubernetes Subnet Planner


A Simple Sizing Workflow

Use this process before choosing any final CIDR:

1. Decide the networking model

Determine whether your cluster uses:

  • shared subnet IPs
  • overlay pod CIDR
  • secondary pod/service ranges

Without this, the rest of the math is not trustworthy.

2. Estimate peak node count

Do not use current node count only.

Include:

  • future node growth
  • cluster autoscaler expansion
  • extra node pools
  • temporary upgrade capacity

3. Set max pods per node

Use the configured or intended maxPods value, not current pod count.

4. Size the pod range

Calculate the maximum pod addressing requirement:

planned nodes × maxPods × growth factor

5. Size the node subnet

Then calculate the node subnet separately if your design allows it, or combine node and pod requirements if your platform uses a shared-subnet model.

6. Add service CIDR headroom

Service IPs are easy to forget and annoying to fix later.

7. Validate against provider rules

Before deploying, verify:

  • reserved IP behavior
  • per-node pod slice allocation
  • service-specific subnet requirements
  • available address space in the wider VNet / VPC design
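
Steps 2 through 6 collapse into a few lines of arithmetic. A hedged sketch: the growth factor and reserved-address count are assumptions (reservations are applied uniformly to keep the result conservative), and step 1 (the networking model) and step 7 (provider rules) stay manual.

    import math

    # Steps 2-6 of the workflow as arithmetic.
    def plan(nodes: int, max_pods: int, services: int,
             growth: float = 1.3, reserved: int = 5) -> dict:
        def prefix(need: int) -> int:
            return 32 - math.ceil(math.log2(need + reserved))
        return {
            "node subnet":  f"/{prefix(math.ceil(nodes * growth))}",
            "pod range":    f"/{prefix(math.ceil(nodes * max_pods * growth))}",
            "service CIDR": f"/{prefix(math.ceil(services * growth))}",
        }

    print(plan(nodes=40, max_pods=60, services=200))
    # {'node subnet': '/26', 'pod range': '/20', 'service CIDR': '/23'}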


Common Kubernetes Subnet Sizing Mistakes

The most common mistakes are:

  • Treating Kubernetes like VM-only subnet planning
  • Forgetting that pods may consume subnet IPs
  • Using current pod count instead of maxPods
  • Ignoring growth buffer
  • Overlooking service CIDR sizing
  • Using /24 because it “sounds large enough”
  • Forgetting cloud-reserved addresses
  • Not planning for upgrades or node pool expansion

Most of these mistakes do not break the cluster immediately.

They break it later, when the cluster needs to scale.


When You Should Choose a Larger CIDR

Start larger than the minimum when:

  • the cluster is production-critical
  • multiple teams share it
  • rapid growth is likely
  • IP renumbering later would be painful
  • corporate address space policy allows it

Kubernetes subnet problems are much easier to prevent than to repair.


Final Rule of Thumb

When sizing Kubernetes subnets:

  1. Find where pod IPs come from.
  2. Size for maximum pod density, not current pod count.
  3. Add growth headroom.
  4. Account for reserved addresses and platform constraints.
  5. Separate node, pod, and service ranges whenever the platform supports it.

That sequence prevents most production CIDR mistakes.