Managing complex networking setups requires intelligent routing tools that don’t compromise security or compliance. Today, we’re exploring a highly anticipated capability currently in Preview within Google Cloud Load Balancing (GCLB): SNI routing using TLS routes targeting TCP proxies.
Let’s dive into what TLS routes are, how they work under the hood, and how they solve a major scaling challenge for Private Service Connect (PSC) consumers.
What are TLS routes?
As part of the Network Services API (networkservices.googleapis.com), TLS routes define a service routing policy that dictates how incoming TLS traffic is directed to backend services based on Server Name Indication (SNI). Historically, routing traffic by requested domain name required an HTTPS proxy to terminate the TLS connection, inspect the request, and route it; an SSL proxy, by contrast, provided only basic 1:1 connectivity and SSL offload. Google Cloud now allows TLS routes to target a TCP proxy instead.
The mechanics of SNI routing without decryption
When a client initiates a secure connection, the very first step is the TLS handshake. Within the initial “ClientHello” packet of this handshake, the client sends an SNI in plaintext, declaring the exact hostname it is trying to reach. By attaching a TLS route to a TCP proxy, the load balancer essentially “peeks” at the unencrypted SNI header in the ClientHello. It uses this SNI value to match against your routing rules and select the appropriate Backend Service.
Why a TCP proxy instead of an SSL/HTTPS proxy?
If you were to point the route at an SSL or HTTPS proxy, the load balancer would have to hold the SSL certificates and decrypt the traffic before routing it. By pointing at a TCP proxy:
- No decryption: The load balancer never decrypts the payload. It simply forwards the raw TCP stream.
- End-to-end encryption: (m)TLS is enforced end-to-end between the client and the backend service.
This approach gives users the best of both worlds: the intelligent, domain-based routing usually reserved for Layer 7, combined with the strict, end-to-end security posture of Layer 4. This is an absolute game-changer for customers with strict compliance requirements who need to route traffic without intermediate decryption, as well as for customers with non-HTTP workloads.
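To make the contrast concrete, here is a minimal Terraform sketch of the target TCP proxy side of such a setup. The resource type is real, but the specific names are assumptions for illustration; note the absence of any certificate configuration.

```hcl
# Sketch (assumed names): the TCP proxy that a TLS route targets.
# Unlike google_compute_target_ssl_proxy, it takes no ssl_certificates,
# so the load balancer never holds keys or decrypts the stream.
resource "google_compute_target_tcp_proxy" "default" {
  provider        = google-beta
  project         = google_project.producer.project_id
  name            = "sg-tcp-proxy"
  # Default backend; an attached TLS route selects the backend per SNI match.
  backend_service = google_compute_backend_service.service_1.id
}
```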
Here is what the resource looks like in Terraform:
resource "google_network_services_tls_route" "tls_route_1" {
  provider       = google-beta
  project        = google_project.producer.project_id
  name           = "sg-tls-route-1"
  location       = "global"
  target_proxies = [google_compute_target_tcp_proxy.default.id]

  rules {
    matches {
      sni_host = ["service1.ex.com"]
    }
    action {
      destinations {
        service_name = google_compute_backend_service.service_1.id
      }
    }
  }
}
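Because rules is a repeatable block, steering a second hostname to its own backend is just one more rules block inside the same resource. A hedged sketch, assuming a service_2 backend service exists:

```hcl
# Assumed sketch: an additional rules block in the same TLS route,
# steering a second SNI to a different backend service.
rules {
  matches {
    sni_host = ["service2.ex.com"]
  }
  action {
    destinations {
      service_name = google_compute_backend_service.service_2.id
    }
  }
}
```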
The use case: Solving the multiservice PSC endpoint challenge
Now that we understand how TLS Routes work, let’s look at the exact problem they solve: Multiservice PSC Endpoints.
The problem statement
Service providers often expose multiple different services to their consumers using Private Service Connect (PSC). Traditionally, a PSC endpoint is defined by three strict attributes:
- Protocol
- Port
- IP address
If a provider needed to expose multiple distinct services to a consumer project, the consumer was forced to create multiple PSC endpoints. If you had 10 services, you needed either 10 IPs and one port or 10 ports and one IP; either way, you ended up with 10 forwarding rules. This 1:1 mapping of forwarding rule to service is highly suboptimal, resulting in rapid IP address exhaustion and heavy management overhead for network administrators.
The TLS route solution
TLS routes completely solve this problem. With SNI-based routing, you only need one PSC endpoint accepting traffic on a specific triplet (protocol, port, IP address). When traffic hits the consumer’s PSC endpoint, it is carried over to the producer side where the TCP target proxy invokes the TLS route. Based entirely on the SNI, the traffic is routed to the correct backend service. Even better, those backend services can be serving traffic on completely different ports. One IP address can now seamlessly front an entire ecosystem of services.
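On the consumer side, that single endpoint is just one forwarding rule targeting the producer's service attachment. A minimal Terraform sketch, with assumed resource names and addressing:

```hcl
# Sketch (assumed names): the lone PSC endpoint in the consumer VPC.
# Every service shares this one IP; SNI routing on the producer side does the rest.
resource "google_compute_forwarding_rule" "psc_endpoint" {
  project               = google_project.consumer.project_id
  name                  = "psc-consumer-endpoint"
  region                = "europe-west2"
  network               = google_compute_network.consumer.id
  subnetwork            = google_compute_subnetwork.consumer.id
  ip_address            = google_compute_address.psc_ip.id
  target                = google_compute_service_attachment.default.id
  load_balancing_scheme = "" # must be empty for PSC consumer endpoints
}
```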
Building the test environment
To see this in action, we can use a Terraform deployment that demonstrates publishing multiple services behind a single PSC endpoint using a CrILB, a target TCP proxy, and TLS routes. The complete Terraform configuration can be found here. Please follow the guidance in the repository to instantiate the environment.
Architecture overview
This setup involves two GCP Projects: Producer and Consumer.
Producer Project (The Provider)
This project hosts the actual services and exposes them securely via PSC.
- Load Balancer: Cross-Region Internal Application Load Balancer (CrILB).
- Target TCP Proxy: Handles the incoming TLS traffic without terminating it.
- TLS Routes: Distribute traffic based on the requested SNI:
  - service1.ex.com ➔ routes to Backend Service 1 (instance group in region_a).
  - service2.ex.com ➔ routes to Backend Service 2 (instance group in region_b).
- Backends: Two Managed Instance Groups (MIGs) across two regions.
- Service Attachment: Exposes the load balancer to the Consumer project via PSC.
Consumer Project (The Client)
This project accesses the producer’s services privately using PSC.
- PSC Endpoint: A local forwarding rule in the consumer VPC connecting to the producer's Service Attachment.
- Test Client: A bastion (siege) host to securely test connectivity using curl and SNI resolution.
Request flow:
1. Users request service1.ex.com or service2.ex.com.
2. A single PSC endpoint in the consumer project receives the traffic.
3. The forwarding rule behind the service attachment points at the TCP proxy.
4. The TLS route selects a destination based on the SNI.
5. Traffic lands on the matching backend service in the producer project.
Testing the setup
Once successfully deployed, verify the SNI routing from the Consumer project.
1. Get the PSC Endpoint IP Address:
$ terraform output PSC_CONSUMER_IP
"192.168.10.2"
2. SSH into the Bastion Host:
Connect to the siege host in the Consumer project.
$ gcloud compute ssh consumer-siege-host --zone=europe-west2-b
3. Test Service 1 (Region A):
Use the --resolve flag to map the hostname to the PSC endpoint IP, so curl sends service1.ex.com as the SNI and hits Backend Service 1:
$ curl -k --resolve service1.ex.com:443:192.168.10.2 https://service1.ex.com
----------------------------------------------
Your IP address: 10.10.200.3
Server Hostname: mig-a-4v85
Server load (1/5/15): 0/0/0
Region and Zone: europe-west2-b
----------------------------------------------
All $_SERVER variable at http://service1.ex.com/server.php
----------------------------------------------
4. Test Service 2 (Region B):
Use the exact same IP address but change the SNI to hit Backend Service 2:
$ curl -k --resolve service2.ex.com:443:192.168.10.2 https://service2.ex.com
----------------------------------------------
Your IP address: 10.10.200.4
Server Hostname: mig-b-fhg2
Server load (1/5/15): 0/0.05/0.03
Region and Zone: us-central1-c
----------------------------------------------
All $_SERVER variable at http://service2.ex.com/server.php
As we can see, we are querying the same PSC endpoint (192.168.10.2), and depending on the SNI (service1.ex.com or service2.ex.com), the responses come from different backends.
Summary
This implementation of TLS routes for multiservice PSC endpoints represents a significant step forward in simplifying secure, multi-tenant networking within Google Cloud. By leveraging SNI-based routing, you can now consolidate multiple services behind a single PSC endpoint, drastically reducing architectural complexity and the overhead of managing numerous IP addresses. We encourage you to test this configuration in your own environment to experience firsthand how TLS Routes can streamline your service connectivity. Your feedback is essential to us. Please try it out and share your insights, questions, or any technical challenges you encounter as you integrate this into your workflow.
For more details, please consult the documentation.