Konnectivity agent error

Hello,

I have a GKE cluster with an issue where the Konnectivity agent is unable to connect to port 8132 of the control plane. When I perform a netcat against the public IP, the port is open. However, when I do the same against the private IP (the one used by the konnectivity-agent), it times out. I’ve checked all my firewall rules, and there is no filtering in place. What can I do to fix this error, so that access through the proxy to my applications is allowed?

Error: "cannot connect once" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 10.132.0.56:8132: i/o timeout\""
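For reference, the connectivity checks I ran look roughly like this (the public IP below is a placeholder; 10.132.0.56 is the private endpoint from the error message):

```shell
# Probe the Konnectivity port (8132) on both control-plane endpoints.
# -v: verbose, -z: scan only (no data), -w 5: 5-second timeout.
nc -vz -w 5 203.0.113.10 8132   # public endpoint: connection succeeds
nc -vz -w 5 10.132.0.56 8132    # private endpoint: times out
```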

Thanks in advance,

Regards

@Neferites Do you have Private Google Access turned on in the subnet where the GKE cluster is running?
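You can verify it on the cluster's subnet with something like this (SUBNET_NAME and REGION are placeholders for your environment):

```shell
# Print whether Private Google Access is enabled on the subnet.
# Expected output is "True" when it is turned on.
gcloud compute networks subnets describe SUBNET_NAME \
    --region REGION \
    --format="value(privateIpGoogleAccess)"
```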

Yes, Private Google Access is turned on. Two other clusters with the same configuration are running fine in that network.

I tested access from a worker node to the internal IP of the control plane on port 8132, and it times out. If I do the same with the public IP, the port is reachable.

Which GKE version are you running? And is it the same version as the clusters which are “working”?

Thanks for the additional information, but it does not answer my question. Could you check if Private Google Access is turned on?
Happy to hop on a quick call to take a look

Munish Gupta

Chief Information Officer (CIO)

8 X GCP Certified

Cloudwerx.tech | Premier Google Cloud Partner

Hi @Neferites ,

Based on the description of your problem, it could be an issue with the network configuration or a routing issue within your cluster. You can check the following to see what causes the private IP address to time out when being accessed:

  • GKE Cluster Network Policy and Firewall rules: GKE’s default network policy and firewall rules might be blocking communication between nodes. Also make sure that they allow outbound traffic on port 8132.

  • GKE Node Tags: Check that the firewall rules allowing the Konnectivity agent’s traffic reference the appropriate GKE node tags for your cluster.

  • GKE Cluster Upgrade: Check whether the cluster is running a current version. If not, consider upgrading your GKE cluster to a more recent version.

  • Instance Metadata: Check for missing or incorrect metadata. Nodes need the necessary instance metadata and permissions to communicate with the control plane.
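A few of the checks above can be scripted; here is a rough sketch with placeholder names (CLUSTER, ZONE, and NETWORK are assumptions for your environment):

```shell
# List firewall rules on the cluster's network, including target tags,
# to spot anything that could block traffic to port 8132.
gcloud compute firewall-rules list \
    --filter="network=NETWORK" \
    --format="table(name,direction,allowed[],denied[],targetTags.list())"

# Compare control-plane and node versions of the cluster.
gcloud container clusters describe CLUSTER --zone ZONE \
    --format="value(currentMasterVersion,currentNodeVersion)"

# List any NetworkPolicy objects that might restrict the konnectivity-agent.
kubectl get networkpolicy --all-namespaces
```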

If the issue is still occurring after these checks, I suggest you create a support case and contact support so the issue can be investigated on your project.

Hello,

  • GKE version: 1.26.5-gke.2700, same as the other clusters.
  • Private Google Access is turned on.
  • Firewall rules have been checked, and there is no NetworkPolicy in place.
  • Node tags are the same between clusters.
  • No metadata is missing.

Regards