[20222] in Kerberos_V5_Development
Re: [External] Re: kprop across NAT boundaries (patching privsafe)
daemon@ATHENA.MIT.EDU (Jorj Bauer)
Thu Jan 7 10:39:27 2021
From: Jorj Bauer <jorj@temple.edu>
To: "krbdev@mit.edu" <krbdev@mit.edu>, Greg Hudson <ghudson@mit.edu>
Date: Thu, 7 Jan 2021 15:39:03 +0000
Message-ID: <b2c5db96-e077-47ac-b270-0aaa6770fcfd@Spark>
In-Reply-To: <0255671e-0333-3b4e-5676-753180cc0a4e@mit.edu>
Thanks for pointing it out. No, it’s not sufficient:
./kprop/kpropd: Incorrect net address while decoding database size from client
Both the source and destination addresses are changed in flight.
The kprop request (let’s say 10.0.0.10/24) contacts a load balancer (let’s say 10.1.0.10/24), which forwards to an ingress (10.1.3.220/25) which forwards to a pod (10.42.0.224) which is running kpropd in a container. From the kpropd service’s perspective, the source is the ingress (or perhaps the load balancer, depending on specific configuration) and the destination is the pod. From the kprop request’s perspective, the source is the kprop server and the destination is the load balancer.
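The mismatch above can be sketched as a toy model (this is not krb5 code; the check function and addresses are illustrative, using the hypothetical values from the example): a privsafe-style check compares the endpoint addresses embedded in the message against what the receiving socket actually observes, and NAT guarantees they disagree.

```python
# Conceptual sketch only (not the krb5 implementation): why a check that
# bakes the source and destination addresses into the message fails when
# NAT rewrites both endpoints in flight.

def privsafe_style_check(embedded_src, embedded_dst, seen_src, seen_dst):
    """Accept only if the addresses carried inside the message match the
    addresses the receiver actually observes on its socket."""
    return embedded_src == seen_src and embedded_dst == seen_dst

# kprop embeds what *it* sees: itself and the load balancer.
embedded = ("10.0.0.10", "10.1.0.10")

# kpropd in the pod observes the ingress as its peer and the pod IP locally.
observed = ("10.1.3.220", "10.42.0.224")

# Neither address matches, hence "Incorrect net address".
print(privsafe_style_check(*embedded, *observed))  # False
```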
I think there is one possible architecture where that patch is sufficient to make it work in a kubernetes cluster behind a load balancer - where the load balancer is using SNAT and where the service is not using an ingress in k8s (but is directly presenting on the network). That architecture isn’t feasible for me: it would mean having the entire k8s infrastructure behind SNAT (which creates other intercommunication problems), and it would mean bypassing the k8s ingress that’s being used to load balance while dynamically scaling deployments based on current load.
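The SNAT-plus-direct-presentation case can be sketched the same way (again purely illustrative; the helper below assumes, without checking the actual commit, that the patch keeps the destination-address check while relaxing the source side): with the service presenting directly on the network, the destination the client embeds is the pod's own address, so the check passes; behind an ingress it never can.

```python
# Hypothetical sketch of the SNAT + direct-presentation architecture above.
# Assumption (not verified against the referenced commit): the patch keeps
# the destination-address comparison but no longer enforces the source side.

def relaxed_check(embedded_dst, seen_dst):
    """Accept when the destination baked into the message matches the
    address the receiver is actually reachable at."""
    return embedded_dst == seen_dst

# Direct presentation: the client connects to the pod's real address, so the
# embedded destination and the receiver's local address agree.
print(relaxed_check("10.42.0.224", "10.42.0.224"))  # True

# Behind an ingress: the client embeds the load balancer's address, which
# never matches the pod, so even the relaxed check fails.
print(relaxed_check("10.1.0.10", "10.42.0.224"))  # False
```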
— j
On Jan 6, 2021, 8:56 PM -0500, Greg Hudson <ghudson@mit.edu> wrote:
On 1/5/21 11:17 AM, Jorj Bauer wrote:
Because the privsafe protocol bakes in the source and destination address and port, it’s not possible to run kprop through layers of NAT (without doing something that undoes the damage NAT does). In particular, I’m finding this to be one of the problems with being able to run Kerberos “for real” inside Kubernetes, where we have an F5 fronting multiple k8s clusters, whose ingresses fan out traffic to multiple pods inside each.
1.18 and 1.19 beta have this commit:
https://github.com/krb5/krb5/commit/775e496aac2650343ec20826b1ba7f6306a12f3c
Is it not sufficient?
_______________________________________________
krbdev mailing list krbdev@mit.edu
https://mailman.mit.edu/mailman/listinfo/krbdev