GKE Subnet Calculator (Google Kubernetes Engine CIDR Planner)

Correct GKE subnet sizing requires planning three separate IP ranges:

  • Primary subnet (nodes)
  • Pod secondary range
  • Service secondary range

Unlike EKS, GKE uses alias IP ranges, which separate node addressing from pod addressing. This makes clusters more scalable, but it requires proper CIDR planning.

Use this calculator to size your GKE cluster safely before deployment.


Recommended Input Values

If you’re unsure what values to use, these are realistic production assumptions.

Max pods per node

  • Default Kubernetes limit: 110
  • Common production planning value: 64–110
  • Conservative value: 64

If unsure, use 110 for safe upper-bound planning.


Growth buffer

  • Recommended minimum: 25%
  • Fast-growing environments: 40%+
  • Enterprise production: 30%

Always plan for future scaling, not day-one capacity.


How GKE Networking Works (Quick Summary)

In VPC-native GKE clusters:

  • Nodes receive IPs from the primary subnet range
  • Pods receive IPs from a secondary range
  • Services receive IPs from another secondary range

This means:

Node scaling and pod scaling are independent — but both must be sized correctly.

If you want the full explanation:

Read the GKE Networking Deep Dive


What This GKE Subnet Calculator Calculates

The planner determines:

1️⃣ Required primary subnet size (nodes)

Based on:

  • Planned node count
  • Growth buffer

Required node IPs:

ceil(plannedNodes × bufferFactor)


2️⃣ Required pod secondary range size

Based on:

  • Planned nodes
  • Max pods per node
  • Growth buffer

Required pod IPs:

ceil(plannedNodes × maxPodsPerNode × bufferFactor)
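Both formulas can be sketched in a few lines of Python. The function names are illustrative, but the code also models one GKE-specific wrinkle: GKE carves the pod range into one power-of-two CIDR block per node, sized to hold at least 2× maxPodsPerNode (a /24 at the default 110), so per-node rounding can push the real pod-range requirement above the plain formula.

```python
import math

def gke_ip_requirements(planned_nodes: int, max_pods_per_node: int, buffer_pct: float):
    """Estimate node and pod IP needs for a VPC-native GKE cluster."""
    factor = 1 + buffer_pct / 100
    node_ips = math.ceil(planned_nodes * factor)

    # Plain formula: ceil(plannedNodes x maxPodsPerNode x bufferFactor)
    pod_ips = math.ceil(planned_nodes * max_pods_per_node * factor)

    # GKE assigns each node a power-of-two pod CIDR holding >= 2x max pods
    # (a /24 for 110 max pods), so the rounded figure is the safer estimate.
    per_node_block = 2 ** math.ceil(math.log2(2 * max_pods_per_node))
    pod_ips_rounded = node_ips * per_node_block

    return node_ips, pod_ips, pod_ips_rounded

def smallest_prefix(required_ips: int) -> int:
    """Smallest IPv4 /N whose address count covers required_ips."""
    return 32 - math.ceil(math.log2(required_ips))

# 20 nodes, 100 max pods per node, 30% growth buffer
node_ips, pod_ips, pod_ips_rounded = gke_ip_requirements(20, 100, 30)
print(node_ips)                                   # 26 node IPs
print(pod_ips, f"/{smallest_prefix(pod_ips)}")    # 2600 -> /20
print(pod_ips_rounded, f"/{smallest_prefix(pod_ips_rounded)}")  # 6656 -> /19
```

Note how per-node rounding moves the same cluster from a /20 to a /19 pod range; when in doubt, size against the rounded figure.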


3️⃣ Recommended service secondary range size

Service IP usage is typically much smaller than pod IP usage.

A common production guideline:

  • /20 for medium clusters
  • /22 for smaller clusters
  • Larger if running heavy internal service usage
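A quick way to sanity-check the address counts behind those prefix sizes:

```python
def addresses(prefix: int) -> int:
    """Total IPv4 addresses in a /prefix block."""
    return 2 ** (32 - prefix)

print(addresses(22))  # 1024 service IPs for smaller clusters
print(addresses(20))  # 4096 service IPs for medium clusters
```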

Common GKE Subnet Planning Mistakes

  • Making the primary subnet too small for future node scaling
  • Underestimating pod secondary range size
  • Forgetting growth buffer
  • Not allocating a dedicated service range
  • Attempting to resize secondary ranges after deployment (painful)

Example: Medium Production GKE Cluster

Assume:

  • 20 nodes
  • 100 max pods per node
  • 30% growth buffer

Node IP requirement

20 × 1.3 = 26 node IPs

Primary subnet must support at least 26 usable IPs. Google Cloud reserves four addresses in every primary subnet range, so a /27 (32 addresses, 28 usable) is the smallest prefix that fits.


Pod IP requirement

20 × 100 × 1.3 = 2,600 IPs

Smallest prefix that supports this:

/20 (4,096 IPs)
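You can confirm the prefix choice with Python's standard ipaddress module (the 10.4.0.0 base address is just an illustrative example):

```python
import ipaddress

pod_range = ipaddress.ip_network("10.4.0.0/20")
print(pod_range.num_addresses)  # 4096: covers the 2,600 required pod IPs

# One step smaller does not fit:
print(ipaddress.ip_network("10.4.0.0/21").num_addresses)  # 2048 < 2600
```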


Why GKE Rarely “Suddenly Runs Out” Like EKS

Because:

  • Pods do not consume primary subnet IPs
  • Secondary ranges isolate scaling domains
  • Alias IP allocation is predictable

However, if the pod secondary range is undersized:

  • Nodes cannot receive additional CIDR slices
  • Autoscaling fails
  • Cluster expansion stalls

When to Use Larger Pod Ranges

Consider allocating a /16 pod range if:

  • Running multi-team workloads
  • Expecting large scaling events
  • Running high pod density nodes
  • Building a long-lived production cluster

It is significantly easier to start large than to migrate later.


FAQ – GKE Subnet Planning

Can I resize GKE secondary ranges later?

Not easily.

In most cases, you must recreate the cluster to change secondary CIDR sizes.


Do pods consume primary subnet IPs in GKE?

No. Pods use secondary alias IP ranges.


What happens if my pod secondary range runs out?

  • New nodes cannot be allocated pod CIDR slices
  • Pods fail to schedule
  • Cluster autoscaler stops scaling