Internal WebSocket communication between GKE Standard services with session affinity: how can I achieve it?

The app is deployed on GKE Standard. The use case is an agentic commerce app, meaning AI agents place shopping orders on behalf of human users.

We have two workloads: an executor pod (think of it as the app pod) that invokes another pod called browser-farm, which runs instances of headless Chrome. Both pods are internal-only and communicate via ClusterIP services, and both have HPA enabled.

The challenge is that the executor pod needs session affinity with the browser farm. Think of an agent autonomously placing an order from a Chrome browser: it has to stick with that same browser session for the whole flow, and that stickiness (session affinity) is what we are trying to achieve.

The question is: how can the executor pod maintain session affinity with a browser-farm pod? We are using the NGINX Ingress controller (an internal LB provisioned as a pod) with its session-affinity annotation. The issue is that WebSocket connections from the executor pod to browser-farm fail with an HTTP 400 error. Interestingly, without the NGINX Ingress in the path, the calls work fine.

Maybe we should look for a simpler GCP-native solution instead of NGINX Ingress. I see that Google Cloud load balancers support session affinity, and perhaps custom headers could be used to maintain it.
More details on the issue below:

The problem: default load balancing randomly routes WebSocket connections initiated by the executor service to different Chrome pods, causing "Session not found" errors and broken CDP connections. To guarantee session stability, we need stickiness based on a custom header.
The solution implemented is header-based affinity: the executor sends a custom HTTP header, which the NGINX Ingress controller hashes on to route traffic consistently to the same pod. The controller also rewrites the Host header to localhost to satisfy the backend.
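Concretely, the header-based affinity setup looks roughly like this (a sketch of what we configured; the header name X-Session-Id, the service name, and port 9222 are placeholders for our actual values):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: browser-farm
  annotations:
    # Pick the upstream pod by consistent-hashing on a custom header
    # that the executor sends with every request/handshake.
    nginx.ingress.kubernetes.io/upstream-hash-by: "$http_x_session_id"
    # Rewrite the Host header to satisfy the backend (headless Chrome
    # rejects CDP requests with an unexpected Host).
    nginx.ingress.kubernetes.io/upstream-vhost: "localhost"
    # Raise proxy timeouts so long-lived WebSocket connections survive.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: browser-farm
                port:
                  number: 9222
```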
However, a technical failure persists. Despite configuring global fixes to preserve the Connection/Upgrade headers essential to the WebSocket handshake, NGINX's core proxy logic keeps stripping them. This configuration conflict results in a consistent 400 Bad Request from the proxy, suggesting the global fix is being overridden at the NGINX level.
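For reference, the kind of global fix we tried is a configuration-snippet annotation that re-asserts the handshake headers, roughly like this (a sketch; note that recent ingress-nginx releases disable snippet annotations by default via the controller's `allow-snippet-annotations` setting, which could explain why the fix appears to be ignored):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: browser-farm
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      # Re-assert the WebSocket handshake headers on the proxied request
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "upgrade";
```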

https://cloud.google.com/load-balancing/docs/backend-service#session_affinity
https://cloud.google.com/load-balancing/docs/https/custom-headers
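The simpler GCP-native direction could also start one layer lower, with client-IP affinity on the ClusterIP Service itself, so kube-proxy pins each executor pod to one browser-farm pod with no proxy in the WebSocket path at all. An untested sketch (service name, selector, and port are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: browser-farm
spec:
  type: ClusterIP
  selector:
    app: browser-farm
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # default 3h; tune to the browser session length
  ports:
    - port: 9222
      targetPort: 9222
```

The trade-off: ClientIP affinity pins by source pod IP, so all sessions originating from one executor replica land on the same browser-farm pod, which is coarser than header-based (per-session) affinity and may matter once HPA scales the executor.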