Ingress-NGINX Retirement Migration Guide: Your 2026 Path Forward

Ingress-NGINX hit end-of-life in March 2026. A practical decision framework and cutover playbook for platform teams still running the retired controller on internet-facing workloads.

By VVV Ops

Ingress-NGINX reached end-of-life in March 2026. If you're still running it on internet-facing workloads — and roughly half the production Kubernetes clusters we audit still are — every new CVE disclosed after April is unpatched by definition. The official Kubernetes SIG Network announcement was blunt about it: no more releases, no more bugfixes, no security updates, repositories going read-only. The Helm charts and images will stay up so existing deployments don't shatter overnight, but that's grace, not support. This is the ingress-nginx retirement migration guide we wish existed eight months ago — a realistic decision framework for platform teams who need to get off, fast, without breaking production.

Why Ingress-NGINX Retirement Matters Right Now

The community was honest about the cause: one or two unpaid maintainers, a flood of feature requests, and a configuration surface (especially the snippets annotation letting users inject arbitrary NGINX config) that became a security liability. The replacement project, InGate, never reached maturity and was retired with the parent. So we're past the point where "waiting for upstream" is a plausible strategy.

The business case for urgency is simple. Running an end-of-life ingress controller on an internet-facing Kubernetes cluster means:

  • Any new CVE is permanent. The next serious header-smuggling or path-traversal bug in the nginx upstream will not be backported. Your SOC 2 auditor will flag it. Your cyber insurance carrier may disclaim coverage.
  • Your detection-to-patch time becomes infinity. We benchmark clients against a 72-hour critical CVE patch SLO. That's not achievable on a codebase with zero maintainers.
  • Compliance frameworks assume vendor support. NIST 800-53 control SI-2 (Flaw Remediation) and the new CRA (Cyber Resilience Act) coming into force in the EU both take a dim view of knowingly running unmaintained software in production. We covered the broader supply-chain angle in our Terraform security best practices for AWS in 2026 post — the same argument applies to any dependency at the edge of your cluster.

Audit your exposure in 10 seconds:

kubectl get pods --all-namespaces \
  --selector app.kubernetes.io/name=ingress-nginx \
  -o wide

If you get any rows back, this post is for you. Before you start moving traffic, make sure the rest of the cluster is stable — our Kubernetes production readiness checklist covers the prerequisites (resource limits, PDBs, health checks) that an ingress migration will mercilessly expose if they're missing.

Your Three Realistic Migration Paths

There are a half-dozen or more ingress controllers you could migrate to. For 95% of the teams we work with, the choice collapses to three:

  1. Gateway API with a conformant implementation (Envoy Gateway, Istio, Cilium, Kong). The long-term bet. Role separation between platform admins and application teams, first-class support for traffic splitting, rewrites, and multi-protocol routing.
  2. NGINX Inc.'s controller (nginxinc/kubernetes-ingress). Not the same project as the retired community one. F5-maintained, commercial (NGINX Plus) and open-source editions, with a published annotation-mapping guide from the community project.
  3. Traefik with the Ingress-NGINX compatibility provider. The closest thing to a drop-in replacement. Keeps your existing Ingress resources working with minimal rewrites while you plan a longer-term move.

HAProxy Unified Gateway, Contour, Kong Ingress Controller, and cloud-provider controllers (AWS Load Balancer Controller, GKE Gateway, Azure Application Gateway Ingress) are all legitimate options, but they either fold into one of the three categories above or serve a specific niche (cloud-native lock-in, specific protocol needs). Start with the three.

Decision Matrix: Which Path Fits Your Team

We've run four migrations in the first quarter of 2026, each with different team constraints. Here's how we score the three realistic options:

| Criterion | Gateway API | NGINX Inc. | Traefik + NGINX Provider |
|---|---|---|---|
| Time to first cutover | 4–8 weeks | 1–3 weeks | 1 week |
| Rewrite of existing Ingress resources | Required (to HTTPRoute) | Most annotations map 1:1 | Most annotations map 1:1 |
| Role separation (platform vs. app teams) | First-class (GatewayClass/Gateway/HTTPRoute) | Annotation-based, weaker | Annotation-based, weaker |
| Advanced routing (header rewrite, retries, mirrors) | Native | Requires Plus for some features | Native middleware |
| Observability maturity in 2026 | Strong (Envoy stats, OTel) | Strong (commercial tooling) | Strong (Prometheus, native dashboard) |
| License cost | $0 (OSS implementations) | $0 OSS / paid Plus | $0 OSS / paid Hub |
| Long-term community momentum | High (GA since 1.1, CNCF core) | Moderate | High |
| Risk of second migration in 24 months | Low | Low | Medium (if you outgrow annotations) |

Rough heuristic: if you have a platform team with >3 engineers and app teams that submit ingress configs, Gateway API repays the migration cost. If you have a two-person ops team and 80 Ingress resources you need to keep working next Tuesday, Traefik or NGINX Inc. gets you off ingress-nginx fastest. Don't let "the strategically correct choice" block the "get off unpatched software" decision.

Gateway API: The Strategic Choice

Gateway API (gateway.networking.k8s.io/v1) went GA in 2023 and has had three years to mature. The mental model is three resources: a GatewayClass the platform team manages, a Gateway the platform team provisions (binding to a load balancer), and HTTPRoute objects the app teams own. A typical port-80/443 setup with Envoy Gateway looks like this:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: public
  namespace: edge
spec:
  gatewayClassName: envoy-gateway
  listeners:
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - kind: Secret
            name: wildcard-tls
      allowedRoutes:
        namespaces:
          from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api
  namespace: prod
spec:
  parentRefs:
    - name: public
      namespace: edge
  hostnames: ["api.example.com"]
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /v1
      backendRefs:
        - name: api-v1
          port: 8080
          weight: 90
        - name: api-v2
          port: 8080
          weight: 10

The weight field is native traffic splitting — no canary-weight annotation dance. The official implementations page (gateway-api.sigs.k8s.io/implementations) listed 25+ conformant implementations as of 2025, which is why we recommend this path for any team whose platform will still exist in three years. Budget 4–8 weeks for a careful migration: ~1 week to stand up a second Gateway in parallel, 2–4 weeks to convert Ingress resources to HTTPRoute namespace by namespace, 1 week for the cutover, 1 week for cleanup.

NGINX Inc. Controller: The Compatibility Path

If the phrase "convert 120 Ingress resources to HTTPRoute" is a non-starter this quarter, NGINX Inc.'s F5-maintained controller is the right answer. Critical point: this is not the retired project. Different repo (nginxinc/kubernetes-ingress), different CRDs, different maintainers, active commercial backing. The official migration guide publishes annotation mappings for ~90% of common community-project annotations; the edge cases are usually snippets and session affinity.
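Before committing to any path, inventory which annotations you actually use — that inventory, not the controller install, determines your timeline. A sketch of a quick jq tally (the demo input is inline for illustration; in practice dump your real resources with `kubectl get ingress -A -o json > ingresses.json`):

```shell
# Demo input; in practice: kubectl get ingress -A -o json > ingresses.json
cat > ingresses.json <<'EOF'
{"items":[
  {"metadata":{"annotations":{
    "nginx.ingress.kubernetes.io/rewrite-target":"/",
    "nginx.ingress.kubernetes.io/configuration-snippet":"add_header X-Env prod;"}}},
  {"metadata":{"annotations":{
    "nginx.ingress.kubernetes.io/rewrite-target":"/"}}},
  {"metadata":{}}
]}
EOF

# Tally community-project annotations across all Ingress resources,
# most-used first; snippets near the top means a longer migration.
jq -r '[.items[].metadata.annotations // {} | keys[]
        | select(startswith("nginx.ingress.kubernetes.io/"))]
       | group_by(.) | sort_by(-length)
       | .[] | "\(length)\t\(.[0])"' ingresses.json
```

Anything in the output that doesn't appear in the annotation-mapping guide is work you should scope before picking a target.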

Install with Helm:

helm repo add nginx-stable https://helm.nginx.com/stable
helm install nginx-ingress nginx-stable/nginx-ingress \
  --namespace ingress-nginx-inc \
  --create-namespace \
  --set controller.ingressClass=nginx-inc \
  --set controller.watchIngressWithoutClass=false

Then flip your Ingress resources to ingressClassName: nginx-inc one service at a time. The watchIngressWithoutClass=false line is load-bearing: without it, both the old and new controllers will fight for the same resources during cutover, and you'll see intermittent 502s nobody can reproduce. Been there, bled for it.
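The per-service flip is a one-line spec change. A sketch, with illustrative resource and service names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api                       # hypothetical service
  namespace: prod
spec:
  ingressClassName: nginx-inc     # was: nginx -- this line is the cutover
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-v1
                port:
                  number: 8080
```

Because each Ingress moves independently, you can flip low-risk services first and watch error rates before touching anything customer-facing.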

Traefik with the NGINX Provider: The Drop-In Path

Traefik's kubernetesIngressNGINX provider, introduced in Traefik v3.3, parses community-project annotations directly. It's the closest thing to a truly drop-in migration: install Traefik, enable the provider, keep your existing Ingress resources unchanged.

# values.yaml for Traefik Helm chart
providers:
  kubernetesIngressNGINX:
    enabled: true
    ingressClass: nginx
additionalArguments:
  - "--providers.kubernetesingressnginx.ingressclass=nginx"

We've seen clients cut over in a single weekend with this pattern. The tradeoff: you're still expressing your traffic config in the vocabulary of a retired project. Plan for a follow-up migration to Traefik's native IngressRoute CRD or to Gateway API within 12 months. Use this as a bridge, not a destination.
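For that follow-up migration, the same HTTPS route expressed in Traefik's native CRD looks roughly like this — a sketch, assuming Traefik v3's `traefik.io/v1alpha1` API, the default `websecure` entrypoint, and illustrative names:

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: api                  # hypothetical route name
  namespace: prod
spec:
  entryPoints:
    - websecure              # Traefik's default HTTPS entrypoint
  routes:
    - match: Host(`api.example.com`) && PathPrefix(`/v1`)
      kind: Rule
      services:
        - name: api-v1
          port: 8080
  tls:
    secretName: wildcard-tls
```

Converting a handful of these by hand early in the bridge period gives you a realistic estimate for the eventual full move.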

The Cutover Playbook: Canary, Shadow, Rollback

Regardless of which controller you pick, the cutover mechanics are the same. Do not do a big-bang swap of ingressClassName. Here's the pattern that has worked on every migration we've shipped:

  1. Install the new controller in parallel on a different IngressClass and, critically, a different set of load balancer IPs. Both old and new run simultaneously for days, not minutes.
  2. Shadow the traffic. Point a low-TTL DNS record at the new ingress and use a header-based canary (X-Canary: true) or synthetic probes to verify the new path handles TLS, long-running connections, and WebSockets correctly.
  3. Cut over one hostname at a time, highest-traffic last. Watch 5xx rates, p99 latency, and connection errors on both controllers. A well-tuned move shouldn't shift p99 by more than 5%.
  4. Keep the old controller hot for 72 hours after final cutover. The rollback plan is literally "change the DNS record back." If you tear down the old controller the same day, you've removed the fallback that makes this safe.
  5. Only then remove the old controller and the nginx-ingress IngressClass.
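On the Gateway API path, step 2's header-based canary can be expressed as a dedicated route that only matches requests carrying the canary header — a sketch, with Gateway, hostname, and service names assumed:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-canary
  namespace: prod
spec:
  parentRefs:
    - name: public           # the new Gateway (assumed name)
      namespace: edge
  hostnames: ["api.example.com"]
  rules:
    - matches:
        - headers:           # only canary-tagged requests take the new path
            - name: X-Canary
              value: "true"
      backendRefs:
        - name: api-v1
          port: 8080
```

Requests without the header keep flowing through the old controller, so you can exercise TLS, WebSockets, and long-lived connections on the new path without risking real traffic.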

The parallel-install-and-DNS-flip pattern is why we quote 1–3 weeks for a Traefik or NGINX Inc. migration and 4–8 weeks for Gateway API. The controller install is not the work. The careful cutover is.

Common Pitfalls We've Seen in the Field

Four failure modes we've caught before they caused incidents:

  • Dual controllers fighting over the same Ingress. If both controllers watch ingressClassName: "" (or unlabeled ingresses), you'll get split-brain routing. Set watchIngressWithoutClass=false on the new controller and explicitly label every existing resource with ingressClassName: nginx.
  • Lost client IP in WAF rules. ingress-nginx sets X-Forwarded-For differently than Gateway API implementations. If your WAF (Cloudflare, ModSecurity, AWS WAF) uses client IP for rate limiting, validate it end-to-end after cutover. We've seen rate limits silently apply to the load balancer's IP, effectively disabling abuse protection.
  • snippets annotations silently ignored. The retired project's annotation-based NGINX config injection does not port. If you were using nginx.ingress.kubernetes.io/configuration-snippet for custom headers, redirects, or auth logic, reimplement it as Gateway API filters, Traefik middleware, or NGINX Inc. ConfigMaps before cutover, not after.
  • Cert-manager HTTP01 challenge routing. On Gateway API, cert-manager's HTTP-01 solver needs a separate HTTPRoute for the /.well-known/acme-challenge/ path. If you copy-paste an old config, your cert renewals will start failing 60 days after the cutover. Budget time to test renewal explicitly — don't just check that existing certs work.
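A quick way to catch the first pitfall before it bites: scan an exported dump for Ingress resources with no ingressClassName at all. A sketch with inline demo input (in practice, use `kubectl get ingress -A -o json > ingresses.json`):

```shell
# Demo input; in practice: kubectl get ingress -A -o json > ingresses.json
cat > ingresses.json <<'EOF'
{"items":[
  {"metadata":{"namespace":"prod","name":"api"},
   "spec":{"ingressClassName":"nginx"}},
  {"metadata":{"namespace":"web","name":"site"},
   "spec":{"rules":[]}}
]}
EOF

# Any line printed here is an Ingress with no class set -- a resource both
# the old and new controllers may try to claim during cutover.
jq -r '.items[]
  | select(.spec.ingressClassName == null)
  | "\(.metadata.namespace)/\(.metadata.name)"' ingresses.json
```

Label everything this prints with an explicit ingressClassName before installing the second controller.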

The recovery from any of these is fast if you caught it in shadow traffic. It's a page at 3 a.m. if you didn't.
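For the cert-manager pitfall specifically, the cleanest fix on the Gateway API path is to point the HTTP-01 solver at the Gateway and let cert-manager manage the challenge route itself. A sketch — the issuer name, contact email, and Gateway reference are all assumptions to adapt:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod          # assumed issuer name
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com        # assumption: your real contact address
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          gatewayHTTPRoute:       # cert-manager creates the challenge
            parentRefs:           # HTTPRoute against these parents
              - kind: Gateway
                name: public      # assumed Gateway name
                namespace: edge
```

Then trigger a renewal on a test certificate before cutover day — don't wait 60 days to find out the solver path is broken.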

When to Get Help

We've walked four clients through this migration in the last ninety days — two onto Gateway API, one onto NGINX Inc., one using Traefik as a bridge while they plan their strategic move. The common thread: every team underestimated how many Ingress annotations were load-bearing in ways nobody documented, and how many app teams depended on side effects they didn't realize were ingress-nginx-specific.

If your team is staring at a cluster running the retired controller and an April deadline you've already missed, the VVV Ops team can help you scope and execute the migration — discovery audit, decision framework, parallel cutover, and post-migration hardening. We'll start with a 30-minute assessment call to size the job honestly. Whatever path you choose, don't run unpatched internet-facing software into next quarter. The CVE clock is already running.

Tags: ingress-nginx retirement migration guide, ingress-nginx end of life, gateway api kubernetes migration, kubernetes ingress controller, traefik ingress nginx, kubernetes platform engineering