Network Policies in Production: Securing Pod-to-Pod Communication

Introduction
By late 2018, Network Policies had matured from a GA feature (Kubernetes 1.7) into a production-ready security control for Kubernetes clusters. With broader CNI plugin support and improved tooling, teams could implement micro-segmentation and enforce network-level security policies that complemented RBAC and pod security controls.
This mattered because network security was a critical layer in defense-in-depth strategies. While RBAC controlled API access and pod security controlled container privileges, Network Policies controlled pod-to-pod communication, preventing lateral movement and limiting the blast radius of security incidents.
Historical note: NetworkPolicy reached GA in Kubernetes 1.7 (June 2017), but 2018 saw broader adoption as CNI plugins improved support and teams gained experience with production deployments.
Network Policy Concepts
Core Principles
- Default Deny on Selection: Pods not selected by any policy accept all traffic; once a policy selects a pod, traffic in the policy's direction is denied unless explicitly allowed.
- Label-Based Selection: Policies use pod labels to select which pods they apply to.
- Ingress and Egress Rules: Control both incoming and outgoing traffic.
- Namespace Scoping: Policies are namespace-scoped but can allow traffic from pods in other namespaces via a namespaceSelector.
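The namespace-scoping principle can be sketched with a short example. The `team: monitoring` namespace label, the `app: web` selector, and port 9090 below are illustrative assumptions, not values from this document:

```yaml
# Allow ingress to web pods in production from pods in any namespace
# labeled team=monitoring (labels and port are hypothetical)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-cross-namespace-ingress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: monitoring
    ports:
    - protocol: TCP
      port: 9090
```

Note that the policy itself lives in the target namespace; the namespaceSelector matches labels on the source namespaces, which must be applied to those namespaces beforehand.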
Policy Structure
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
```
CNI Plugin Support
Calico
Calico provides comprehensive NetworkPolicy support:
```yaml
# Calico-specific policy features (projectcalico.org/v3 API)
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: calico-policy
spec:
  selector: app == 'web'
  ingress:
  - action: Allow
    protocol: TCP  # Calico requires a protocol when ports are specified
    source:
      selector: app == 'frontend'
    destination:
      ports:
      - 8080
```
Cilium
Cilium uses eBPF for high-performance NetworkPolicy enforcement:
```yaml
# Cilium NetworkPolicy
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: cilium-policy
spec:
  endpointSelector:
    matchLabels:
      app: web
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: frontend
    toPorts:
    - ports:
      - port: "8080"
        protocol: TCP
```
Weave Net
Weave Net provides NetworkPolicy support with its CNI plugin:
```yaml
# Standard Kubernetes NetworkPolicy works with Weave
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: weave-policy
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
```
Comparison: NetworkPolicy vs Service Mesh Security
| Capability | NetworkPolicy | Service Mesh (Istio/Linkerd) |
|---|---|---|
| Layer | L3/L4 (IP/port) | L7 (application layer) |
| Encryption | No (requires additional tools) | Yes (mTLS) |
| Performance | High (CNI-level) | Medium (proxy overhead) |
| Complexity | Low (simple rules) | High (complex configuration) |
| Observability | Limited | Rich (metrics, tracing) |
| Best For | Network segmentation | Application-level security |
Default Deny Pattern
Implement Default Deny
```yaml
# Default deny all ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
---
# Default deny all egress traffic, with a carve-out for DNS
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  # Allow DNS (requires the kube-system namespace to carry the label name=kube-system)
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP  # DNS falls back to TCP for large responses
      port: 53
```
Allow Specific Traffic
```yaml
# Allow frontend to access backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```
Micro-Segmentation Patterns
Tier-Based Segmentation
```yaml
# Frontend tier
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-tier
spec:
  podSelector:
    matchLabels:
      tier: frontend
  ingress:
  - from:
    - namespaceSelector: {}  # Allow from any namespace
    ports:
    - protocol: TCP
      port: 80
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: backend
    ports:
    - protocol: TCP
      port: 8080
---
# Backend tier
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-tier
spec:
  podSelector:
    matchLabels:
      tier: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          tier: database
    ports:
    - protocol: TCP
      port: 5432
```
Application-Based Segmentation
```yaml
# Isolate applications from each other
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-isolation
spec:
  podSelector:
    matchLabels:
      app: app1
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: app1
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: app1
```
Troubleshooting Network Policies
Common Issues
- “Pods can’t communicate”: Check if NetworkPolicy is blocking traffic.
- “DNS not working”: Ensure DNS egress is allowed.
- “Policy not applied”: Verify CNI plugin supports NetworkPolicy.
- “Unexpected blocking”: Review policy rules and pod labels.
Debugging Tools
```bash
# List NetworkPolicies across all namespaces
kubectl get networkpolicies --all-namespaces

# Describe a NetworkPolicy
kubectl describe networkpolicy <policy-name> -n <namespace>

# Test connectivity from a throwaway pod
kubectl run -it --rm debug --image=busybox --restart=Never -- sh
# Inside the pod: wget -O- --timeout=2 http://target-pod:port
```
Best Practices
Policy Design
- Default Deny: Start with default deny, then allow specific traffic.
- Least Privilege: Allow only necessary traffic between pods.
- Label Consistency: Use consistent labels for policy matching.
- Documentation: Document why each policy exists.
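As one way to apply least privilege to egress, an ipBlock rule can confine outbound traffic to a known external range. This is a sketch, not a prescription: the `app: payments` label, the CIDR (drawn from the 203.0.113.0/24 documentation range), and port 443 are all illustrative assumptions:

```yaml
# Restrict a workload's egress to a single external CIDR
# (labels, CIDR, and port are illustrative)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-external-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: payments
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 203.0.113.0/24
    ports:
    - protocol: TCP
      port: 443
```

Because this policy lists Egress in policyTypes, all other outbound traffic from the selected pods (including DNS) is denied unless a separate rule allows it.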
Policy Management
- Version Control: Store NetworkPolicies in Git.
- Testing: Test policies in non-production environments.
- Monitoring: Monitor policy violations and adjust as needed.
- Review: Regularly review and update policies.
Practical Considerations
CNI Plugin Requirements
Not all CNI plugins support NetworkPolicy:
- Calico: Full support with advanced features.
- Cilium: Full support with eBPF performance.
- Weave Net: Full support.
- Flannel: No native support; commonly paired with Calico (the "Canal" deployment) for policy enforcement.
- kubenet: No NetworkPolicy support.
Performance Impact
NetworkPolicy enforcement has performance implications:
- CNI-Level: Policies enforced at CNI level (low overhead).
- Rule Complexity: Complex policies can impact performance.
- Policy Count: Many policies can slow down policy evaluation.
DNS Considerations
NetworkPolicies must allow DNS:
```yaml
# Always allow DNS egress (assumes kube-system is labeled name=kube-system)
egress:
- to:
  - namespaceSelector:
      matchLabels:
        name: kube-system
  ports:
  - protocol: UDP
    port: 53
  - protocol: TCP  # DNS falls back to TCP for large responses
    port: 53
```
Getting Started
```bash
# Create a NetworkPolicy
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
EOF

# Verify the NetworkPolicy
kubectl get networkpolicies
kubectl describe networkpolicy example-policy
```
Caveats & Lessons Learned
- CNI Dependency: NetworkPolicy requires CNI plugin support; verify before deploying.
- Default Behavior: Without policies, all pods can communicate; implement default deny.
- DNS Requirements: Always allow DNS egress; pods need DNS to function.
- Label Management: Policies depend on pod labels; maintain label consistency.
Common Failure Modes
- “All traffic blocked”: Default deny without allowing necessary traffic.
- “DNS failures”: Forgetting to allow DNS egress.
- “Label mismatches”: Policies not matching due to incorrect labels.
Conclusion
Network Policies’ maturation in 2018 enabled production-grade network security for Kubernetes clusters. They provided a critical layer in defense-in-depth strategies, complementing RBAC and pod security controls. While NetworkPolicy implementation required CNI plugin support and careful policy design, they became essential for securing multi-tenant and production Kubernetes deployments.
For organizations deploying Kubernetes in production, Network Policies became a fundamental security control. They demonstrated that Kubernetes networking didn’t have to be flat and permissive—it could be segmented, controlled, and secured. Network Policies proved that Kubernetes could support enterprise-grade network security without sacrificing operational simplicity.
The patterns and practices established with Network Policies in 2018 would influence the development of service mesh security and set the foundation for zero-trust networking in Kubernetes. Network Policies demonstrated that network security could be both powerful and manageable, enabling teams to secure their clusters at the network layer.