How GKE Networking Works
Google Kubernetes Engine (GKE) networking is fundamentally different from EKS and many other Kubernetes providers.
Instead of assigning Pods IPs directly from the primary subnet range, GKE uses:
- Alias IP ranges
- Secondary subnet CIDR blocks
- Separate ranges for nodes, pods, and services
This design makes GKE networking highly scalable — but it introduces planning complexity that many teams misunderstand.
This guide explains exactly how GKE networking works in 2026.
1️⃣ The Core Difference: Pods Use Secondary IP Ranges
In VPC-native GKE clusters (default and recommended mode):
✔ Nodes receive IPs from the primary subnet range
✔ Pods receive IPs from a secondary subnet range
✔ Services receive IPs from a secondary range (or a GKE-managed service range)
This separation dramatically reduces subnet exhaustion risk compared to shared-IP models.
However, it requires correct planning of:
- Primary node subnet size
- Pod secondary range size
- Service IP range size
2️⃣ What Are Alias IP Ranges?
GKE uses alias IPs to assign Pod addresses.
Instead of creating separate virtual interfaces per Pod:
- The node is assigned a CIDR slice from the Pod secondary range
- Pods use IPs from that allocated slice
- Routing happens natively inside the VPC
There is no overlay encapsulation.
Alias IPs integrate directly with VPC routing tables, which provides:
✔ Native performance
✔ Clean routing
✔ Simplified network policy enforcement
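The relationship above can be sketched with Python's standard `ipaddress` module. The CIDRs here are hypothetical, illustrative values: a Pod secondary range, the /24 alias slice GKE carves out of it for one node, and a Pod IP on that node.

```python
import ipaddress

# Hypothetical values: the cluster's Pod secondary range and the /24
# alias slice assigned to one node out of that range.
pod_secondary = ipaddress.ip_network("10.1.0.0/16")
node_alias_slice = ipaddress.ip_network("10.1.7.0/24")
pod_ip = ipaddress.ip_address("10.1.7.42")  # a Pod scheduled on that node

# The slice is a subnet of the Pod range, and the Pod's IP sits inside
# the slice -- so the VPC can route to the Pod natively, no overlay needed.
assert node_alias_slice.subnet_of(pod_secondary)
assert pod_ip in node_alias_slice
```

Because every Pod IP falls inside some node's alias slice, the VPC only needs one route per node, not one per Pod.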
3️⃣ Primary vs Secondary CIDR Ranges
A typical GKE subnet contains:
Primary Range
Used for:
- Node IP addresses
- Internal load balancers
Example: 10.0.0.0/24
Secondary Range (Pods)
Used for:
- Pod IP addresses
Example: 10.1.0.0/16
Secondary Range (Services)
Used for:
- ClusterIP services
Example: 10.2.0.0/20
Modern nuance (2026):
In newer GKE versions (Standard ≥ 1.29, Autopilot ≥ 1.27), Google often assigns Service IPs from a GKE-managed default range (34.118.224.0/20), meaning you may not need to define a custom service secondary range unless you require full control.
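A quick sanity check worth running on any planned layout: the three ranges must not overlap. This sketch uses the example CIDRs from above; swap in your own values.

```python
import ipaddress

# The example ranges from this section -- replace with your own CIDRs.
ranges = {
    "nodes (primary)": ipaddress.ip_network("10.0.0.0/24"),
    "pods (secondary)": ipaddress.ip_network("10.1.0.0/16"),
    "services (secondary)": ipaddress.ip_network("10.2.0.0/20"),
}

# No pair of ranges may overlap.
nets = list(ranges.values())
for i, a in enumerate(nets):
    for b in nets[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"

for name, net in ranges.items():
    print(f"{name}: {net} -> {net.num_addresses} addresses")
```

For these example values the output shows 256, 65,536, and 4,096 addresses respectively, which is the capacity asymmetry the sections below build on.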
4️⃣ How Pod IP Allocation Works in GKE
When a node joins a cluster:
- GKE assigns it a CIDR slice from the Pod secondary range.
- That CIDR slice determines how many Pods the node can run.
- Pods receive IPs from that slice.
By default:
- GKE allocates a /24 alias IP slice per node
- This supports up to ~110 Pods per node (default Kubernetes max)
This means:
- Pod scaling is constrained by the Pod secondary range size
- Node scaling is constrained by the primary subnet range
- Per-node Pod density is constrained by the allocated alias slice
These scaling dimensions are independent.
5️⃣ Pod Density Formula in GKE
At a high level:
Max nodes supported = PodSecondaryRangeSize / PerNodeCIDRSize
Example:
If:
- Pod range = /16 (65,536 IPs)
- Per-node allocation = /24 (256 IPs per node)
Then:
65,536 / 256 = 256 nodes maximum
Each node can run up to ~110 Pods (default), not the full 256 IPs — because Kubernetes sets a practical per-node Pod limit.
This is why both:
- Total Pod secondary range size
- Per-node alias slice size
…must be considered during planning.
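Because both sizes are powers of two, the formula reduces to a difference of prefix lengths, as this minimal sketch shows:

```python
# Max nodes = Pod range size / per-node slice size.
# With power-of-two CIDR sizes this is just a difference of prefixes.
pod_range_prefix = 16   # /16 Pod secondary range (65,536 IPs)
per_node_prefix = 24    # /24 alias slice per node (256 IPs)

max_nodes = 2 ** (per_node_prefix - pod_range_prefix)
print(max_nodes)  # 256
```

Shrinking the per-node slice to /25 would double the node ceiling to 512, at the cost of a lower per-node Pod maximum.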
6️⃣ Why GKE Clusters Run Out of IPs
There are three common failure modes:
1️⃣ Pod Secondary Range Too Small
If undersized:
- Nodes cannot receive additional CIDR slices
- Cluster autoscaler fails to scale
- Pods fail to schedule
2️⃣ Primary Node Subnet Too Small
If primary subnet capacity is insufficient:
- New nodes cannot be created
- Autoscaling fails
- Cluster expansion stalls
3️⃣ Service Range Exhaustion
If using a user-managed service secondary range and it is too small:
- New ClusterIP services cannot allocate addresses
- Service creation fails
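A minimal pre-flight check for the first failure mode, assuming /24 per-node alias slices (the function name and signature are illustrative, not a GKE API):

```python
def pod_range_supports(pod_prefix: int, per_node_prefix: int, nodes: int) -> bool:
    """True if the Pod secondary range can hand out one alias slice per node."""
    return 2 ** (per_node_prefix - pod_prefix) >= nodes

assert pod_range_supports(16, 24, 256)      # a /16 covers 256 nodes...
assert not pod_range_supports(16, 24, 257)  # ...but not 257
```

Running this against your autoscaler's max node count before creating the cluster catches the most common exhaustion failure in advance.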
7️⃣ Advanced: Multiple Pod CIDR Ranges
Modern GKE supports multiple Pod secondary ranges attached to the same subnet.
This allows:
- Expanding Pod address space without recreating the VPC
- Assigning different Pod CIDRs to different node pools
This is useful in large or long-lived production environments.
8️⃣ IPv6 and Dual-Stack Support
GKE supports:
- IPv6-only clusters
- Dual-stack IPv4/IPv6 clusters
In dual-stack mode:
- Pods and Services receive IPv6 addresses from dedicated IPv6 ranges
- IPv4 remains available depending on configuration
IPv6 greatly reduces address exhaustion risk — but requires VPC IPv6 configuration and careful planning.
9️⃣ GKE Standard vs Autopilot Networking
GKE Standard
You control:
- Node pools
- Pod density
- Secondary ranges
- CIDR planning
Requires explicit IP capacity planning.
GKE Autopilot
Google manages:
- Node provisioning
- Pod density
- Much of the operational overhead
However:
You still must size primary and Pod secondary ranges correctly.
Autopilot reduces operational complexity — not CIDR planning responsibility.
🔟 Comparison: GKE vs EKS Networking Model
| Feature | GKE | EKS |
|---|---|---|
| Pods use primary subnet | No | Yes |
| Secondary ranges required | Yes | No |
| Alias IP model | Yes | No |
| Subnet pressure | Lower | Higher |
| Independent Pod scaling | Yes | No |
GKE’s alias IP model provides:
✔ Cleaner separation
✔ Predictable scaling
✔ Reduced subnet exhaustion risk
But still requires deliberate CIDR planning.
1️⃣1️⃣ Best Practices for Production GKE Clusters (2026)
- Always use VPC-native (alias IP) clusters.
- Allocate a large Pod secondary range (often /16 for production).
- Plan primary subnet size for future node scaling.
- Understand the default /24 per-node alias slice behavior.
- Add a 25–30% growth buffer.
- Consider multiple Pod CIDRs for long-lived clusters.
- Document service IP allocation strategy (managed vs custom).
It is significantly easier to start large than to migrate later.
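The buffer recommendation above can be turned into arithmetic. This is a hypothetical sizing, assuming /24 per-node slices and the upper 30% buffer; `expected_nodes` is an illustrative input.

```python
import math

# Apply a 30% growth buffer to the expected node count, then find the
# smallest Pod secondary range (with /24 per-node slices) that fits.
expected_nodes = 180                               # illustrative input
planned_nodes = math.ceil(expected_nodes * 1.30)   # 234 after buffer

# Need 2**(24 - pod_prefix) >= planned_nodes:
pod_prefix = 24 - math.ceil(math.log2(planned_nodes))
print(planned_nodes, f"nodes -> Pod range: /{pod_prefix}")
```

For 180 expected nodes the buffer pushes the requirement past /17, landing on a /16 Pod range, which matches the "often /16 for production" guidance.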
Example Production Layout
Example for a medium production environment:
Primary subnet: 10.0.0.0/22
Pod secondary range: 10.10.0.0/16
Service secondary range (optional): 10.20.0.0/20
This allows:
- Hundreds of nodes
- Thousands of Pods
- Safe long-term scaling
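The capacity implied by this layout can be checked numerically. This sketch assumes /24 per-node slices, the default 110-Pod limit, and that Google Cloud reserves 4 addresses per subnet; it ignores other primary-range consumers such as internal load balancer addresses.

```python
import ipaddress

# Capacity implied by the example layout above.
primary = ipaddress.ip_network("10.0.0.0/22")
pods = ipaddress.ip_network("10.10.0.0/16")

max_nodes_by_primary = primary.num_addresses - 4  # GCP reserves 4 IPs/subnet
max_nodes_by_pods = pods.num_addresses // 256     # one /24 slice per node
node_ceiling = min(max_nodes_by_primary, max_nodes_by_pods)

print(node_ceiling, "nodes,", node_ceiling * 110, "Pods max")
```

Note that the Pod range (256 nodes), not the primary subnet (~1,020 nodes), is the binding constraint here, so expansion means adding Pod CIDR space first.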
Use the GKE Subnet Calculator
To plan your cluster safely, run your numbers through the calculator. It calculates:
- Required Pod CIDR range
- Required node subnet size
- Growth buffer impact
- Recommended prefix lengths
Final Thoughts
GKE networking is powerful because:
Pods and nodes scale independently using secondary CIDR ranges.
This architecture provides predictable scaling and clean separation of IP domains.
But if you underestimate secondary range size, scaling failures will still occur.
If you understand:
- Primary ranges
- Secondary ranges
- Alias IP allocation
- Per-node CIDR slicing
…you understand GKE networking.
And that understanding prevents the most common scaling failures in Google Kubernetes Engine.
Want to understand how this compares to other Kubernetes providers?
→ Kubernetes Networking Comparison Guide