Hello Team,
I’m currently running a GKE Standard tier zonal cluster, version 1.30.9-gke.1127000. I have a Service of type LoadBalancer that exposes more than 30 ports. During a port scan of the load balancer IP, I discovered that additional ports beyond those defined in the Service are also reachable through the same load balancer.
While troubleshooting, I found GCP documentation stating that when a Service exposes more than 5 ports, the internal passthrough Network Load Balancer’s forwarding rule is created with ALL ports instead of an explicit port list. This explains why the GKE node’s open ports are visible when scanning the load balancer IP.
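For context, the all-ports behavior can be confirmed by inspecting the forwarding rule GKE created for the Service. A sketch (the rule name below is a placeholder; region taken from my cluster’s zone):

```shell
# List internal passthrough NLB forwarding rules in the project.
gcloud compute forwarding-rules list \
  --filter="loadBalancingScheme=INTERNAL" \
  --regions=asia-south1

# Describe the rule backing the Service's LB IP. With more than 5 Service
# ports, the output shows "allPorts: true" instead of a "ports:" list.
gcloud compute forwarding-rules describe FORWARDING_RULE_NAME \
  --region=asia-south1
```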
I’ve included the Nmap scans for both the GKE Node and the GKE Load Balancer below for reference.
My question is: how can I close these additional ports on GKE or prevent them from being exposed via the Service Load Balancer?
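One workaround I’m considering, in case it helps frame the question: splitting the Service into multiple LoadBalancer Services, each carrying 5 or fewer ports, so every forwarding rule keeps an explicit port list. A sketch with hypothetical names, selector, and ports (my real Service has 30+ ports):

```shell
# Hypothetical split: keeping each Service at <=5 ports should keep its
# forwarding rule on an explicit port list instead of "allPorts".
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: myapp-lb-a          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: myapp              # hypothetical selector
  ports:
    - name: p7001
      port: 7001
      targetPort: 7001
    - name: p7002
      port: 7002
      targetPort: 7002
EOF
```

Alternatively, I understand `spec.loadBalancerSourceRanges` narrows the firewall rule to specific client CIDRs, but as far as I can tell it does not change the all-ports forwarding rule itself.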
Reference document:
https://cloud.google.com/load-balancing/docs/internal#all_ports
Nmap scan of the GKE node:
nmap -T4 172.30.X.1
Starting Nmap 7.92 ( URL Removed by Staff ) at 2025-03-08 12:51 IST
Nmap scan report for gke-pa-private-gke-u-gke-uat-n2d-std-89b06e39-zctf.asia-south1-a.c.myproject.xyz.internal (172.30.X.1)
Host is up (0.00031s latency).
Not shown: 991 closed tcp ports (conn-refused)
PORT STATE SERVICE
22/tcp open ssh
987/tcp open unknown
2020/tcp open xinupageserver
2021/tcp open servexec
7001/tcp open afs3-callback
7002/tcp open afs3-prserver
12000/tcp open cce4x
31038/tcp filtered unknown
31337/tcp filtered Elite
Nmap scan of the GKE load balancer:
nmap -T4 172.30.X.2
Starting Nmap 7.92 ( URL Removed by Staff ) at 2025-03-08 12:52 IST
Nmap scan report for 172.30.X.2
Host is up (0.00058s latency).
Not shown: 992 closed tcp ports (conn-refused)
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
987/tcp open unknown
2020/tcp open xinupageserver
2021/tcp open servexec
7001/tcp open afs3-callback
7002/tcp open afs3-prserver
12000/tcp open cce4x