@Simelvia @jastes @yanqiang in the Limitations section of the doc, there are instructions to manually restart the control plane, which must happen after all of your nodes are running a supported version. Could you confirm whether you manually restarted the control plane after the version upgrade completed on your nodes? Just to check, could you try doing that once more and then redeploy the Pod to see if that works?
Hi @shannduin, the doc doesn't mention how to actually trigger the manual restart. It only mentions kubectl get nodes, which I ran, and all nodes are on the right version.
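For reference, this is the check I ran (standard kubectl, nothing cluster-specific; it needs a configured kubeconfig):

```shell
# List nodes with their kubelet versions; the VERSION column
# should show the expected GKE version on every node.
kubectl get nodes
```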
Thank you for your advice, @shannduin, my deployment is burstable now. But there's still an open question about why we needed Bursting in the first place: we wanted to allocate smaller resources to our multiple micro-deployments, and that still doesn't seem possible. I applied the exact same file described in the docs for a sample burstable workload, just with smaller resources:
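I don't have the exact manifest handy, but the shape was roughly this — all names and values below are illustrative, with requests deliberately smaller than what the sample used:

```yaml
# Hypothetical sketch of the burstable sample with reduced requests.
# Pod name, container name, and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: burstable-sample
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 25m       # illustrative small value, below Autopilot's minimum
        memory: 32Mi
      limits:
        cpu: 100m      # limit above request, so the Pod is Burstable
        memory: 64Mi
```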
Specifying requests of at least 50m CPU might help. As noted in Resource requests in Autopilot#MinimumAndMaximum, 50m CPU / 52MiB memory is the minimum request for the general-purpose compute class.
Following @shannduin's instructions, I was able to request 50m CPU and 52MiB memory.
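For anyone following along, the resources block that was accepted looked like this (just the fragment; the rest of the Pod spec is unchanged):

```yaml
# Minimum requests accepted by Autopilot's general-purpose compute class,
# per the thread above.
resources:
  requests:
    cpu: 50m
    memory: 52Mi
```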
1. Upgrade the Autopilot cluster.
2. Nodes will be auto-upgraded.
3. Repeat step 1 to manually restart the control plane again.
The "Limitations" section that I linked to has the instructions: basically, you need to run gcloud container clusters upgrade --master against the cluster, targeting the same GKE version it's already on, which triggers a control plane restart.
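A sketch of that command, assuming a zonal cluster — CLUSTER_NAME, VERSION, and ZONE are placeholders you'd fill in with your own values:

```shell
# Re-run the master upgrade to the version the control plane is already on;
# this forces a control plane restart without changing the GKE version.
gcloud container clusters upgrade CLUSTER_NAME \
  --master \
  --cluster-version=VERSION \
  --zone=ZONE
```

You can find the current version with `gcloud container clusters describe CLUSTER_NAME --zone=ZONE`.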
Yes, as soon as I deploy the Pod (copied from the URL), I get an Autopilot Mutator warning that the CPU resources have been adjusted to meet the minimum requirements.
Hey, I gave this a go and can confirm. If I manually adjust the request to 500m and set the limit to a higher value like 750m, it works as expected. I'll check if there's an explanation and get back to you.
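For reference, the resources fragment from that test (the request/limit gap is what makes the container Burstable):

```yaml
# Combination that behaved as expected in my test:
# CPU limit set above the CPU request.
resources:
  requests:
    cpu: 500m
  limits:
    cpu: 750m
```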