Filestore multishare instances not cleaned up from GKE when all PVs are removed

We’ve noticed an issue where Filestore instances that were auto-created for GKE storage PVs are not always cleaned up after the corresponding PVs are gone (and have been gone for days, weeks, or months). These are somewhat expensive resources, so we’d like them removed - any advice on how to delete them, or on how to find out what’s holding on to them? As best we can tell from the GKE side they’re unused, and the console/gcloud side doesn’t enumerate individual shares for these instances.

I can, for example, find all of the PVs in my cluster with a command like

kubectl get pv -o json | jq -cr '.items | map(select(.spec.csi.driver=="filestore.csi.storage.gke.io")) | map({name: .metadata.name, vh: .spec.csi.volumeHandle}) | sort_by(.vh) | .[]'

and see that I have 17 PVs that reference 9 different Filestore instances (as seen in the volumeHandle). But gcloud filestore instances list shows 17 instances, not 9. (I think it’s a coincidence that the number of PVs equals the number of instances - we’re using multishare, and as mentioned, many of the PVs share the same volume handle/instance reference.)
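To show the mismatch concretely, here’s a rough cross-check I’ve been using. Two assumptions baked in: that the instance name is the third /-separated segment of the volumeHandle, and that value(name.basename()) strips the projects/.../instances/ prefix from the gcloud output - adjust both for your setup.

```shell
# Instance names referenced by Filestore CSI PVs.
# Assumes the instance name is the 3rd "/"-separated field of the
# volumeHandle; multishare handles may put it elsewhere - adjust -f.
kubectl get pv -o json \
  | jq -r '.items[]
           | select(.spec.csi.driver=="filestore.csi.storage.gke.io")
           | .spec.csi.volumeHandle' \
  | cut -d/ -f3 | sort -u > pv-instances.txt

# Instance names that actually exist in the project.
gcloud filestore instances list --format='value(name.basename())' \
  | sort -u > gcloud-instances.txt

# Instances that exist but that no PV references - cleanup candidates.
comm -13 pv-instances.txt gcloud-instances.txt
```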

We’re following these directions for Filestore multishares on GKE: Filestore multishares for GKE | Google Cloud Documentation

Hey,

Hope you’re keeping well.

Filestore multishare instances created by GKE’s Filestore CSI driver aren’t automatically deleted when PVs are removed — GKE only manages the share lifecycle, not the underlying instance. If the instance is still present, it usually means the CSI driver hasn’t issued a delete because something still references it, such as a lingering PVC, StorageClass, or finalizer in Kubernetes. I’d recommend checking kubectl get pvc --all-namespaces for any orphaned claims and inspecting PV/PVC metadata for finalizers that might block deletion.
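As a rough sketch of both checks (assuming the standard filestore.csi.storage.gke.io driver name), something like:

```shell
# PVCs that still exist, and which PV each one is bound to.
kubectl get pvc --all-namespaces -o json \
  | jq -r '.items[]
           | "\(.metadata.namespace)/\(.metadata.name) -> \(.spec.volumeName)"'

# Finalizers on Filestore PVs that could block deletion.
kubectl get pv -o json \
  | jq -r '.items[]
           | select(.spec.csi.driver=="filestore.csi.storage.gke.io")
           | "\(.metadata.name): \(.metadata.finalizers // [] | join(","))"'
```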

Thanks and regards,
Taz

Hi. Quoting from the doc I linked:

When you delete a PV, the GKE Filestore CSI driver reclaims the allocated share storage and removes the share. The GKE Filestore CSI driver also deletes the Filestore instance if all associated shares have been deleted.

So I do expect cleanup.

I have looked at all of the PVs and PVCs associated with the Filestore CSI driver on all of the GKE clusters in the project, and I can see which instances they’re associated with - and several instances remain that nothing references.
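For completeness, this is roughly how I swept the clusters (the --location flag assumes a reasonably recent gcloud; older versions want --zone or --region):

```shell
# Walk every GKE cluster in the current project and print the Filestore
# instances its PVs reference (via the volumeHandle).
gcloud container clusters list --format='value(name,location)' \
| while read -r name loc; do
    gcloud container clusters get-credentials "$name" --location "$loc" >/dev/null 2>&1
    echo "== $name =="
    kubectl get pv -o json \
      | jq -r '.items[]
               | select(.spec.csi.driver=="filestore.csi.storage.gke.io")
               | .spec.csi.volumeHandle'
  done
```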

What I would really like here is some visibility into the instances, so that I can trace resources forward, and find out if there is something surprising lurking.
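One partial workaround I’ve found for the visibility gap: the Filestore REST API appears to expose per-instance shares (a shares.list method under instances) even though gcloud and the console don’t surface them. I’m assuming the v1 endpoint here, and the placeholder values are mine:

```shell
# Hypothetical placeholders - substitute your own values.
PROJECT=my-project
LOCATION=us-central1
INSTANCE=my-filestore-instance

# List the shares on a (multishare) instance via the Filestore REST API.
url="https://file.googleapis.com/v1/projects/$PROJECT/locations/$LOCATION/instances/$INSTANCE/shares"
curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" "$url"
```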