Persistent disk quota reached but can't figure out where it's being used

My quota is 4096 GB, and it was reached after I triggered a serverless Spark job (which failed). It still shows the quota as being used, and I can't trigger another job due to the error: "Insufficient 'DISKS_TOTAL_GB' quota. Requested 1200.0, available 46.0."

I checked all the "Disks" sections under Compute Engine, Bare Metal, etc., and I don't see any such disk there.
All I have is 5 VMs using about 100 GB total (`gcloud compute disks list` shows the same).

Has anyone faced this issue? Any way to resolve it? Please help!

This may be a case of resources that were not properly deleted. Consider checking your Persistent Disks to validate, and if the job ran against a GKE cluster, also check for Persistent Volumes and Persistent Volume Claims (PVCs) that could be stuck in the Terminating state. The documentation below may be helpful for your use case. [1][2]

[1] https://cloud.google.com/compute/docs/disks
[2] https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims
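As a sketch of the checks above: the first command inspects the regional `DISKS_TOTAL_GB` quota directly (the region `us-central1` here is just an example; substitute the region your Spark job ran in), and the remaining commands look for leftover disks and stuck PVCs.

```shell
# Show usage vs. limit for the DISKS_TOTAL_GB quota in one region
# (replace us-central1 with your region)
gcloud compute regions describe us-central1 \
  --flatten="quotas[]" \
  --filter="quotas.metric=DISKS_TOTAL_GB" \
  --format="table(quotas.metric,quotas.usage,quotas.limit)"

# List all persistent disks in the project, with sizes
gcloud compute disks list --format="table(name,zone,sizeGb,status)"

# If GKE is involved: look for PVs and PVCs stuck in Terminating
kubectl get pv
kubectl get pvc --all-namespaces
```

If the numbers reported by `gcloud compute disks list` don't add up to the quota usage shown, the discrepancy is usually either a resource type you haven't listed yet or, as in this thread, delayed quota metrics.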

I did. Apparently the issue was that it took about 2+ hours for the usage metrics to get updated, so I wasn't able to submit a new job during that time. It's updated now and the issue is resolved; I just needed to wait a couple of hours :slightly_smiling_face: