NGINX Ingress Controller: Routing Kubernetes Traffic in 2016

Introduction
The Kubernetes Ingress API landed in 1.1 as a beta resource, but early controllers were experimental. By mid-2016, the community-backed NGINX Ingress Controller had emerged as the default choice for production traffic, combining host- and path-based routing, TLS termination, and a rich annotation set.
What made it compelling wasn’t only features—it was familiarity. Many teams already knew how NGINX behaved under load, how it failed, and how to debug it at 3am. That operational “muscle memory” mattered as much as the Ingress API itself.
Historical note: the sample below uses the 2016-era Ingress API (extensions/v1beta1). The modern, GA API is networking.k8s.io/v1, but the routing concepts and controller architecture are the same.
Controller Architecture
- ConfigMap Driven: Controller watches Ingress resources and ConfigMaps to render an NGINX config template.
- Pod Deployment: Runs as a Deployment/DaemonSet with host networking (or NodePort) to listen on ports 80/443.
- Reload Strategy: Writes configuration to /etc/nginx/nginx.conf and triggers a lightweight nginx -s reload.
- Status Updates: Reports the ingress IP/hostname back to each Ingress resource's status so DNS automation can pick it up.
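As a concrete sketch of this deployment model, a minimal 2016-style DaemonSet might look like the following; the image tag, resource names, and namespace are illustrative assumptions, not a pinned release:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      hostNetwork: true  # bind directly to node ports 80/443
      containers:
      - name: nginx-ingress-controller
        image: gcr.io/google_containers/nginx-ingress-controller:0.8.3  # illustrative tag
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
        - containerPort: 443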
Key Features (2016)
- Path & Host Routing: Match based on virtual hosts and prefix/regex paths.
- TLS Termination: References Kubernetes Secrets containing cert/key pairs; supports SNI for multi-domain TLS.
- Sticky Sessions: Enabled via nginx.ingress.kubernetes.io/affinity: cookie.
- Rate Limiting & Auth: Basic authentication, whitelist annotations, and request rate limits via annotations; a combined snippet follows this list.
- WebSocket & gRPC Support: Upstream upgrades handled automatically.
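Most of these features are switched on per Ingress through metadata annotations. A combined fragment is sketched below; the keys follow the nginx.ingress.kubernetes.io prefix used in this article, though exact annotation names varied across controller releases:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie                  # sticky sessions
    nginx.ingress.kubernetes.io/session-cookie-name: route        # assumed cookie name
    nginx.ingress.kubernetes.io/auth-type: basic                  # basic auth, backed by...
    nginx.ingress.kubernetes.io/auth-secret: basic-auth           # ...an htpasswd-style Secret
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/8"
    nginx.ingress.kubernetes.io/limit-rps: "10"                   # requests per second per client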
Sample Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-tls
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
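The tls stanza above points at a Secret named cafe-tls. A minimal sketch of that Secret follows; the data values are placeholders for base64-encoded PEM material, not real keys:

apiVersion: v1
kind: Secret
metadata:
  name: cafe-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>  # placeholder
  tls.key: <base64-encoded private key>  # placeholder

The controller matches the SNI hostname against tls.hosts and serves the certificate from the referenced Secret.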
Operational Guidance
- Default Backend: Provide a catch-all Service (usually 404) for unmatched hosts.
- SSL Certificates: Automate TLS via kube-lego or Let’s Encrypt scripts; wildcard certificates still required external tooling.
- ConfigMap Tuning: Set global options (proxy-body-size, proxy-read-timeout) using the controller’s ConfigMap; see the sketch after this list.
- Namespace Isolation: Use admission controllers or policy to control which teams can create Ingress objects.
- Logging/Monitoring: Enable access logs and export NGINX metrics via the nginx-prometheus-exporter.
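A sketch of the tuning ConfigMap mentioned above; the name and namespace are assumptions, and the controller must be pointed at the ConfigMap via a startup flag:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf  # assumed name, wired to the controller at startup
  namespace: kube-system
data:
  proxy-body-size: "10m"          # cap on client request bodies
  proxy-read-timeout: "120"       # seconds to wait on an upstream response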
Common Gotchas
- Health checks vs. default backend: cloud LBs can mark your ingress “unhealthy” if the default backend returns unexpected codes (a default-backend sketch follows this list).
- Annotation drift: a single per-Ingress annotation can override a safe global default—treat annotations like code and review changes.
- Reload storms: lots of Ingress churn can cause frequent reloads; watch config update rate and controller CPU.
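A default backend is the usual remedy for the health-check gotcha above: the stock image answers 200 on /healthz and 404 everywhere else, which keeps cloud LB probes satisfied. A sketch, with an illustrative image tag:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.0  # 404 on /, 200 on /healthz
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
spec:
  selector:
    app: default-http-backend
  ports:
  - port: 80
    targetPort: 8080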
Ingress API Evolution
- extensions/v1beta1: Initial API lacked advanced concepts like pathType.
- Future Directions: SIG Network began drafting GA requirements, working toward an eventual networking.k8s.io/v1, richer status fields, and multi-protocol support (a GA-style example follows this list).
- Alternatives: Traefik, HAProxy, and GCE ingress controllers offered options, but NGINX had the broadest community backing.
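For comparison, here is the cafe sample expressed in the GA API. Routing semantics are unchanged; paths gain an explicit pathType, the backend reference is structured, and the class annotation becomes a first-class field:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: cafe
spec:
  ingressClassName: nginx  # replaces the kubernetes.io/ingress.class annotation
  tls:
  - hosts:
    - cafe.example.com
    secretName: cafe-tls
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        pathType: Prefix  # explicit match semantics, absent from v1beta1
        backend:
          service:
            name: tea-svc
            port:
              number: 80
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc
            port:
              number: 80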
Conclusion
The NGINX Ingress Controller delivered a pragmatic, feature-rich gateway for Kubernetes workloads in 2016. Its annotation ecosystem and compatibility with existing NGINX expertise made it the de facto ingress solution until the broader Ingress ecosystem matured.