Hello – I asked this on the GCP Slack but didn’t hear much, so thought I’d try here.
I’m trying to understand the best way to deploy nginx as a reverse proxy in front of a Python app server (gunicorn) on Cloud Run. It seems like the most recommended option is a sidecar.
I read through the article on this exact method here (https://cloud.google.com/run/docs/internet-proxy-nginx-sidecar). My hang-up with this approach is scaling. I have this application set up in an existing Kubernetes cluster, so I have some idea of how it scales. My nginx pods never scale past 2 because nginx is so efficient…even when the gunicorn pods scale up to 100 or more.
In this scenario, would the Cloud Run sidecar option also scale nginx up to 100 instances when gunicorn does? That feels weird to me if so, as I’d have a lot of underutilized nginx instances (unless I’ve misunderstood something). Curious if I have that right or not, or if there’s anything else I should consider here.
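For reference, here’s roughly the service manifest I’d be using for the sidecar option, based on that article — the service/image names are placeholders, and I may have details wrong:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app  # placeholder name
spec:
  template:
    metadata:
      annotations:
        # Start the app container before nginx, per the sidecar article
        run.googleapis.com/container-dependencies: '{"nginx": ["app"]}'
    spec:
      containers:
      # nginx is the ingress container; it's the only one with a port
      - name: nginx
        image: nginx  # would bake in my reverse-proxy config
        ports:
        - containerPort: 8080
      # gunicorn listens on localhost only, proxied by nginx
      - name: app
        image: gcr.io/my-project/gunicorn-app  # placeholder image
```

My understanding is that both containers here live in the same Cloud Run instance, so they scale as a unit — which is exactly what prompts my question above.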
It seems like the other option is to deploy nginx and gunicorn as two separate cloud run services, and use a serverless VPC connector as outlined here (https://chuntezuka.medium.com/connect-2-cloud-run-services-internally-with-serverless-vpc-connector-64e3bdb4d39e). This would keep nginx and gunicorn’s scaling independent. Any drawbacks or issues with doing things this way?
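If it helps, this is roughly how I’d expect to deploy the two-service version — again, service/connector names are placeholders, and I’m not certain I have all the flags right:

```shell
# Deploy gunicorn as an internal-only service so it's not publicly reachable
gcloud run deploy gunicorn-app \
  --image gcr.io/my-project/gunicorn-app \
  --ingress internal \
  --region us-central1

# Deploy nginx publicly, sending its upstream traffic through the VPC
# connector so it can reach the internal gunicorn service
gcloud run deploy nginx-proxy \
  --image gcr.io/my-project/nginx \
  --vpc-connector my-connector \
  --vpc-egress all-traffic \
  --region us-central1
```

Each service would then autoscale on its own traffic, which is the behavior I’m used to from Kubernetes.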
Thanks!