GKE upgrade -- how does it handle pod movement?

So basically, let’s say we have a cluster, and in that cluster a node pool with 3 nodes. Let’s say we have a deployment that has 1 replica, so 1/1.

What would happen to that deployment, in detail, during a GKE upgrade? From what I know, GKE picks a node to upgrade, cordons it, creates a new node with the newer version, and once that's done it moves the pods over to the new node by destroying/re-creating them, correct?

My question, though, is: the service that the pod provides (whatever that may be) will be disrupted, correct, since there is only 1 replica? Or does GKE force-create a 2nd replica on the new node, so the deployment temporarily becomes 2/1, and only then delete the pod on the old node, putting the deployment back to 1/1 without any disruption?

I’m wondering whether I should force all deployments with 1 replica up to 2 replicas during the upgrade via kubectl, and then put them back to 1 replica after the upgrade has finished, or whether GKE kinda does this under the hood?
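For reference, the temporary scale-up/scale-down described above would look something like this (the deployment name `my-app` is a placeholder; these commands need a live cluster and a configured kubectl context):

```shell
# Before the upgrade: bump to 2 replicas and wait for the new pod to be ready
kubectl scale deployment/my-app --replicas=2
kubectl rollout status deployment/my-app

# ... run the node pool upgrade ...

# After the upgrade completes: scale back down
kubectl scale deployment/my-app --replicas=1
```

Note the second replica only helps if it lands on a different node than the first, which is not guaranteed without a spread constraint.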

Ah ok, for future reference: my thinking above was correct. GKE does not create a replacement pod before evicting the old one (eviction deletes the pod, and only then does the Deployment controller re-create it), so you need at least 2 replicas to prevent the service from becoming unavailable, potentially more if it’s more critical, just to make sure.
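One caveat worth adding: two replicas on the *same* node would both be evicted together when that node is drained. A topology spread constraint (or pod anti-affinity) keeps them on different nodes, so a single node drain can only take one down at a time. A minimal sketch, assuming an `app: my-app` label on the pods:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Spread replicas across nodes so one node drain
      # can only evict one of them.
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: my-app
      containers:
      - name: my-app
        image: my-app:latest  # placeholder image
```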

Disruption budgets don’t seem to be strictly necessary for this.
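That said, a PodDisruptionBudget is how you make the guarantee explicit rather than relying on scheduling luck: GKE's node drain respects PDBs, so with a budget like the one below an eviction will wait rather than take the last available replica down. A sketch, again assuming the `app: my-app` label:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: my-app
```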