Problem
The CCM does not set the ipMode field on LoadBalancerIngress entries. Without this, kube-proxy binds the LoadBalancer IP to every node and intercepts traffic destined for it, bypassing the NodeBalancer entirely for
cluster-internal requests.
This causes a well-known class of failures when proxy protocol is enabled on the NodeBalancer: internal traffic (e.g. cert-manager HTTP01 validation, in-cluster requests to LoadBalancer IPs) reaches the ingress controller
without the expected PROXY protocol header, resulting in broken header errors and failed requests.
See: cert-manager/cert-manager#466
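To make the failure concrete: a backend with proxy protocol enabled expects every TCP connection to begin with a PROXY header line. A minimal sketch of what a v1 header looks like (the helper name and example addresses are illustrative, not from the CCM):

```go
package main

import "fmt"

// proxyV1Header builds the text header a proxy-protocol-enabled backend
// expects at the start of each connection. Traffic that kube-proxy
// short-circuits past the NodeBalancer arrives without this line, which
// the ingress controller then rejects as a broken header.
func proxyV1Header(srcIP, dstIP string, srcPort, dstPort int) string {
	return fmt.Sprintf("PROXY TCP4 %s %s %d %d\r\n", srcIP, dstIP, srcPort, dstPort)
}

func main() {
	fmt.Print(proxyV1Header("198.51.100.1", "203.0.113.10", 56324, 443))
	// prints: PROXY TCP4 198.51.100.1 203.0.113.10 56324 443
}
```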
The current workaround is deploying hairpin-proxy, which intercepts DNS and injects PROXY protocol headers for internal traffic. This shouldn't be necessary.
Solution
KEP-1860 added an ipMode field to LoadBalancerIngress with two values:
- `VIP`: kube-proxy binds the LB IP to nodes (the current default behavior)
- `Proxy`: kube-proxy does not intercept LB traffic, forcing it through the actual LoadBalancer
The CCM should set this field based on proxy protocol configuration. When all ports on a service use proxy protocol, ipMode should be Proxy so that kube-proxy doesn't short-circuit traffic around the NodeBalancer.
Otherwise it should be VIP.
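The decision rule above can be sketched as follows. This is a minimal, self-contained illustration: `portConfig` and `detectIPMode` are hypothetical stand-ins for the CCM's per-port NodeBalancer config and the proposed helper, not the actual implementation.

```go
package main

import "fmt"

// IPMode mirrors the KEP-1860 values ("VIP" and "Proxy").
type IPMode string

const (
	IPModeVIP   IPMode = "VIP"
	IPModeProxy IPMode = "Proxy"
)

// portConfig is a stand-in for the CCM's per-port NodeBalancer config;
// only the proxy-protocol setting matters for this decision.
type portConfig struct {
	ProxyProtocol bool
}

// detectIPMode returns Proxy only when every port uses proxy protocol,
// so kube-proxy never short-circuits traffic around the NodeBalancer.
// Any plain port falls back to VIP, the current default behavior.
func detectIPMode(ports []portConfig) IPMode {
	if len(ports) == 0 {
		return IPModeVIP
	}
	for _, p := range ports {
		if !p.ProxyProtocol {
			return IPModeVIP
		}
	}
	return IPModeProxy
}

func main() {
	fmt.Println(detectIPMode([]portConfig{{true}, {true}}))  // Proxy
	fmt.Println(detectIPMode([]portConfig{{true}, {false}})) // VIP
}
```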
A manual override annotation (`service.beta.kubernetes.io/linode-loadbalancer-ip-mode`) would also be useful for edge cases.
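A hedged sketch of how the override might look on a Service, assuming the annotation accepts the two KEP-1860 values (the annotation is proposed here, not yet implemented):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  annotations:
    # Proposed override; value would be "VIP" or "Proxy".
    service.beta.kubernetes.io/linode-loadbalancer-ip-mode: "Proxy"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - port: 443
      targetPort: 443
```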
Reference implementation
I've put together an implementation in this commit that:
- Adds a `getIPMode()` helper that auto-detects from proxy protocol config or reads an annotation override
- Sets `IPMode` on all `LoadBalancerIngress` entries in `makeLoadBalancerStatus()` (all three return paths: hostname-only, IPv6, default)
- Includes tests and documentation
The Kubernetes API types (`LoadBalancerIPModeVIP`, `LoadBalancerIPModeProxy`) are already available in the `k8s.io/api` version used by this project.
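The stamping step can be sketched like this. The types below are local stand-ins for `corev1.LoadBalancerIngress` and `corev1.LoadBalancerIPMode` so the example is self-contained; `setIPMode` is an illustrative helper, not code from the referenced commit.

```go
package main

import "fmt"

// Local stand-ins for the k8s.io/api/core/v1 types; the real CCM would
// use corev1.LoadBalancerIngress and corev1.LoadBalancerIPMode directly.
type LoadBalancerIPMode string

const (
	LoadBalancerIPModeVIP   LoadBalancerIPMode = "VIP"
	LoadBalancerIPModeProxy LoadBalancerIPMode = "Proxy"
)

type LoadBalancerIngress struct {
	IP     string
	IPMode *LoadBalancerIPMode
}

// setIPMode stamps the same mode on every ingress entry, matching the
// idea that makeLoadBalancerStatus() should set IPMode on all of its
// return paths (hostname-only, IPv6, default). IPMode is a pointer
// field in the API, so a shared address to one value is sufficient.
func setIPMode(ingress []LoadBalancerIngress, mode LoadBalancerIPMode) {
	for i := range ingress {
		ingress[i].IPMode = &mode
	}
}

func main() {
	ing := []LoadBalancerIngress{{IP: "203.0.113.10"}, {IP: "2600:db8::1"}}
	setIPMode(ing, LoadBalancerIPModeProxy)
	for _, e := range ing {
		fmt.Println(e.IP, *e.IPMode)
	}
}
```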