GKE Persistent Volume deleted but disk still attached. Stuck

We installed a Helm chart for Elasticsearch which created two PVCs and PVs, but something went wrong and the pods were stuck in an error state:

0/24 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims

So we deleted everything, but it got stuck in Terminating state. After that, the only solution we found was to delete the resources with the --force option. Now the namespace is empty, but the disks still exist in GCloud and we can NOT delete them because they appear to be in use by one of the GKE nodes.
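For context, the force-delete steps we used looked roughly like this (resource names here are placeholders, not our actual PVC/PV names):

```shell
# Force-delete a PVC stuck in Terminating (hypothetical name).
kubectl delete pvc elasticsearch-data --grace-period=0 --force

# A PVC/PV often stays Terminating because of a protection finalizer;
# clearing it lets the object go away, but orphans the backing disk.
kubectl patch pvc elasticsearch-data -p '{"metadata":{"finalizers":null}}'
kubectl delete pv <pv-name> --grace-period=0 --force
```

Note that clearing finalizers bypasses the controller that would normally detach and delete the GCP disk, which is presumably how we ended up in this situation.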

We don’t know what else to try, but we need to delete those disks as we are being charged for them.

Thanks in advance!


Can you try deleting the namespace and then deleting the disks?

Nothing changed. The disks are still being used by the GKE nodes.

After deleting the namespace, how did you try to delete the nodes? CLI or UI/Console?

Do you mean how did I try to delete the disks?

I tried in the UI/console, but the delete button is not available since the disks are still attached to a GKE node.

I can try via the CLI, but I guess the result will be the same.

I tried with the CLI and I get:
The disk resource ‘projects/*/zones/europe-west1-b/disks/pvc-’ is already being used
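If it helps anyone else: you can ask the compute API directly which instance is holding a disk, since the disk's `users` field lists its attachments (the `pvc-` name filter and zone below are just examples):

```shell
# List leftover PVC-backed disks and the instances they are attached to.
gcloud compute disks list --filter="name~^pvc-" \
  --format="table(name, zone, users.list())"

# Or inspect a single disk; "users" names the attached instance(s).
gcloud compute disks describe <disk-name> --zone=europe-west1-b \
  --format="value(users)"
```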

I managed to solve it by running this command against the node that had the disk attached:

gcloud compute instances detach-disk my-instance --disk=my-disk
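Once the disk is detached, it can finally be deleted. A sketch with placeholder names and zone (depending on your setup you may also need `--zone` on the detach command):

```shell
# Detach the orphaned disk from the instance, then delete it.
gcloud compute instances detach-disk my-instance \
  --disk=my-disk --zone=europe-west1-b
gcloud compute disks delete my-disk --zone=europe-west1-b
```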

+1 that’s how I would have suggested doing it.

After force-deleting resources you've often orphaned GCP objects from Kubernetes: they're no longer represented in k8s, so no k8s controller is going to know about them. So you have to use the GCP API directly.

Fortunately, everything our controllers do (the gce-pd in-tree driver and/or the PD CSI driver in this case) uses the public API, so there's no magic needed. The GCP control-plane lifecycle of a volume (create, attach, detach, delete) can all be done by hand with the compute API, as you've just discovered.
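To make that concrete, here is the whole lifecycle done by hand with gcloud (all names, sizes, and the zone are placeholders; this is a sketch of what the controllers do via the API, not their actual code):

```shell
# Create a persistent disk.
gcloud compute disks create my-disk --size=10GB --zone=europe-west1-b

# Attach it to an instance.
gcloud compute instances attach-disk my-instance \
  --disk=my-disk --zone=europe-west1-b

# Detach it again.
gcloud compute instances detach-disk my-instance \
  --disk=my-disk --zone=europe-west1-b

# Delete it.
gcloud compute disks delete my-disk --zone=europe-west1-b --quiet
```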
