I have a Cloud Run Job deployed in the us-west1 region that was working just fine until a few hours ago, with no recent deploys whatsoever. Starting about an hour and a half ago, every job execution gets stuck in a Pending status indefinitely, and no executions are completing either.
I tried redeploying the job with a new image and the issue still persists. I saw a thread mentioning that changing the region might fix the issue, so I’m trying that right now (I’m deploying to us-central1). Does anyone else have this issue? It seems I’m not able to open a case directly with Google to report it due to my organization’s support tier.
Hey there! I’m one of the engineers working on the Cloud Run API.
Pending is a state that typically indicates the user has attempted to run the job multiple times: they either ran out of quota or started so many executions that new attempts are waiting for resources to free up before they start (e.g., for other executions to finish). However, executions are not expected to remain in this state forever.
Unfortunately, it is challenging to know precisely what’s going on with your job without additional identifying information. If you’d like to message me directly, I may be able to find out what’s happening or unstick this instance.
Hey whaught! Thanks for clarifying why jobs fall into a Pending status. Fortunately, the issue has been resolved: my project was indeed hitting a resource quota, specifically the per-region instance limit for Cloud Run. For unknown reasons, our quota had dropped from 1,000 to 10 instances per region. We filed a ticket with support, and they restored the limit to 1,000.
We’ve already set up alert policies for our most-used services, so we’ll be prepared if this happens again. Thanks again for replying!
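In case it helps anyone who lands on this thread, a Cloud Monitoring alert policy along these lines can flag quota exhaustion before executions start piling up in Pending. This is just a sketch: it assumes the allocation-quota usage metric (`serviceruntime.googleapis.com/quota/allocation/usage`) on the `consumer_quota` resource and a Cloud Run instances `quota_metric` label, and a hypothetical 800-of-1,000 threshold; double-check the exact metric and label names on the Quotas page for your project before relying on it.

```json
{
  "displayName": "Cloud Run instance quota usage high",
  "combiner": "OR",
  "conditions": [
    {
      "displayName": "Allocation quota usage above 80% of a 1,000-instance limit",
      "conditionThreshold": {
        "filter": "resource.type = \"consumer_quota\" AND metric.type = \"serviceruntime.googleapis.com/quota/allocation/usage\" AND metric.labels.quota_metric = \"run.googleapis.com/instances\"",
        "comparison": "COMPARISON_GT",
        "thresholdValue": 800,
        "duration": "0s",
        "aggregations": [
          {
            "alignmentPeriod": "1800s",
            "perSeriesAligner": "ALIGN_MAX"
          }
        ]
      }
    }
  ]
}
```

Saved as `policy.json`, something like `gcloud alpha monitoring policies create --policy-from-file=policy.json` should create it (add a `notificationChannels` entry if you want it to actually page someone).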