One of the big differences between Apigee Edge, Google's long-running SaaS offering, and Apigee X, the new offering, is the networking flexibility that Apigee X brings. Apigee X runs entirely within Google Cloud: the runtime lives in a VPC managed by Google, which is peered to a VPC managed by the customer. This means customers can control ingress to and egress from the API gateways without managing the gateways themselves. Even so, designing the network to accommodate the requirements of an enterprise requires forethought and planning.
Shared VPC is a Google Cloud networking capability that allows customers to implement a centrally governed network infrastructure: a single VPC network in a host project can be shared with, and connect resources from, multiple service projects.
Why would I want to use Shared VPC networks with Apigee X?
With respect to Apigee X networking, the benefit of a shared VPC comes from the fact that the Apigee X runtime is peered with the customer's own VPC: the exposed Apigee runtime endpoint can easily be reached from the customer's VPC network, and the runtime can connect to southbound backends in that same VPC.
Of course, the IP address space needs to be allocated carefully in advance, because VPC peering cannot handle overlapping IP ranges. In many scenarios, though, the situation described above is oversimplified: the VPC network is part of a single GCP project, whereas most organizations will want to follow best practices around resource hierarchies and separate their infrastructure into multiple projects. This means that the network described above would be disjoint from the networks in other projects. Imagine a topology where the Apigee X tenant project is peered to a VPC network as before, but the internal backends are located in another project's VPC, as shown in this diagram:
If we peer the Apigee VPC to the backend VPC, as shown by the lower peering connection, the backend can be reached from the Apigee network and vice versa, because network peering is symmetric. The Apigee X tenant project, however, can only communicate with the Apigee VPC, because peering is not transitive, as described in the VPC peering documentation. To achieve this flow you could deploy additional proxies in the Apigee VPC that forward traffic across the peering link to the backend VPC, but that adds operational overhead and maintenance effort.
For this reason, this article describes how customers can use shared VPCs to establish connectivity between the Apigee X runtime and backends located in another GCP project under the same organization, without the need for additional networking components. In scenarios where the backends live in another GCP organization, in a private data center, or in another public cloud, you would of course have to rely on traditional means of establishing hybrid network connectivity and route propagation.
How can I set up a shared VPC host and service project?
To set up a shared VPC we first have to distinguish between the host project and the service projects. The host project is the project that owns the VPC and defines the subnets that are shared with the service projects. The service projects in turn use these subnets to create resources such as Compute Engine VMs or GKE clusters.
The shared VPC provisioning documentation provides step-by-step instructions for authorizing a user as a Shared VPC admin by granting them the roles/compute.xpnAdmin and roles/resourcemanager.projectIamAdmin roles. Afterwards, the admin needs to create the VPC network and subnets and share them with the right service projects as needed. For an initial Apigee network architecture we will create a VPC network that contains two subnets, one for the backend services and one for the Apigee ingress proxies, and share them with the corresponding GCP projects in the same organization. A sketch of these steps in gcloud follows below.
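As an illustration, the following gcloud commands sketch this setup. The network and subnet names, region, IP ranges, and member identity are assumptions chosen for this example; adapt them to your environment.

```sh
# Enable shared VPC on the host project and attach the Apigee service
# project (the backend project would be attached the same way).
gcloud compute shared-vpc enable "$HOST_PROJECT"
gcloud compute shared-vpc associated-projects add "$SERVICE_PROJECT" \
  --host-project "$HOST_PROJECT"

# Create the shared VPC network with one subnet for the Apigee ingress
# proxies and one for the backend services (names and ranges are examples).
gcloud compute networks create shared-vpc \
  --project "$HOST_PROJECT" --subnet-mode custom
gcloud compute networks subnets create apigee-ingress \
  --project "$HOST_PROJECT" --network shared-vpc \
  --region us-west1 --range 10.100.0.0/24
gcloud compute networks subnets create backends \
  --project "$HOST_PROJECT" --network shared-vpc \
  --region us-west1 --range 10.100.1.0/24

# Grant an admin of the service project use of its shared subnet
# (the member shown here is a placeholder).
gcloud compute networks subnets add-iam-policy-binding apigee-ingress \
  --project "$HOST_PROJECT" --region us-west1 \
  --member "user:apigee-admin@example.com" \
  --role "roles/compute.networkUser"
```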
How can I deploy Apigee X in a shared VPC network?
First, we have to create a peering connection to the Apigee runtime at the VPC level. For that we allocate a peering range that does not conflict with the existing IP allocations in the network and that satisfies the Apigee X peering range requirements.
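A minimal sketch of these two steps with gcloud, assuming the shared-vpc network from the previous example; the range name and prefix length are assumptions, and the exact size must satisfy the Apigee X peering range requirements:

```sh
# Reserve an IP range for service networking; /23 matches the trial
# CIDR used later in this article, but check Apigee's sizing requirements.
gcloud compute addresses create apigee-peering-range \
  --project "$HOST_PROJECT" --global --purpose VPC_PEERING \
  --prefix-length 23 --network shared-vpc

# Peer the shared VPC with the Google-managed Apigee tenant project.
gcloud services vpc-peerings connect \
  --project "$HOST_PROJECT" \
  --service servicenetworking.googleapis.com \
  --network shared-vpc \
  --ranges apigee-peering-range
```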
Once the peering is established we can deploy the ingress proxies and an HTTPS load balancer in the Apigee service project's shared subnet. This exposes the Apigee API gateway for external consumption. In the backend service project we can likewise provision the backend services in their own shared VPC subnet.
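As a rough sketch of the ingress side, assuming the names from the earlier examples (the template and group names and machine type are placeholders; the proxy startup configuration and the load balancer setup of backend service, URL map, target HTTPS proxy, and forwarding rule are omitted for brevity):

```sh
# A managed instance group of ingress proxy VMs in the shared subnet; these
# VMs forward external traffic to the peered Apigee runtime endpoint.
gcloud compute instance-templates create apigee-ingress-template \
  --project "$SERVICE_PROJECT" \
  --machine-type e2-small \
  --network "projects/$HOST_PROJECT/global/networks/shared-vpc" \
  --subnet "projects/$HOST_PROJECT/regions/us-west1/subnetworks/apigee-ingress" \
  --tags apigee-ingress
gcloud compute instance-groups managed create apigee-ingress-mig \
  --project "$SERVICE_PROJECT" --region us-west1 \
  --template apigee-ingress-template --size 2
```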
Deploying Apigee in a shared VPC with the Apigee DevRel Provisioning Script
To facilitate the deployment in a shared VPC scenario the Apigee DevRel repository contains a provisioning script that automatically configures the required topology.
The script assumes that you have the following setup already prepared:
- A host project that owns the shared VPC and has the service project attached.
- A service project that will own the Apigee X organization and host the ingress proxy and load balancer.
- A shared VPC (see above).
- A subnet that is shared with the Apigee service project.
To start, check out the DevRel repository and configure your project names:
```sh
git clone https://github.com/apigee/devrel.git
cd devrel
export HOST_PROJECT=<your-Apigee-Host-Project>
export SERVICE_PROJECT=<your-Apigee-Service-Project>
```
To configure the Apigee X VPC peering and firewall, you first configure the host project:
```sh
export NETWORK=<your-shared-vpc>
export SUBNET=<your-shared-subnet>
export PEERING_CIDR=10.111.0.0/23
./tools/apigee-x-trial-provision/apigee-x-trial-provision.sh \
  -p "$HOST_PROJECT" --shared-vpc-host-config --peering-cidr "$PEERING_CIDR"
```
This should yield output similar to the following, where NETWORK and SUBNET represent the fully qualified paths under the host project:
```sh
export NETWORK=projects/$HOST_PROJECT/global/networks/$NETWORK
export SUBNET=projects/$HOST_PROJECT/regions/us-west1/subnetworks/$SUBNET
```
Make sure you execute these exports, then continue with the following script (adding any customization options you need):
```sh
./tools/apigee-x-trial-provision/apigee-x-trial-provision.sh -p "$SERVICE_PROJECT"
```
When the script finishes, it creates a sample proxy in your Apigee environment and prints a test curl command to STDOUT:
```sh
# To send an EXTERNAL test request, execute following command:
curl https://10-111-111-111.nip.io/hello-world -v
```
To test a backend in another shared VPC subnet, create a second subnet in the host project as described above and use its private RFC 1918 IP address as the target URL for your API proxies.
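For example, here is a sketch of a test backend in the backends subnet created earlier; the VM name, zone, machine type, and $BACKEND_PROJECT variable are assumptions for illustration:

```sh
# A test backend VM with only a private address in the shared backend subnet.
gcloud compute instances create test-backend \
  --project "$BACKEND_PROJECT" --zone us-west1-a \
  --machine-type e2-micro --no-address \
  --network "projects/$HOST_PROJECT/global/networks/shared-vpc" \
  --subnet "projects/$HOST_PROJECT/regions/us-west1/subnetworks/backends"

# Look up its internal IP and use http://<that-IP> as the proxy target URL.
gcloud compute instances describe test-backend \
  --project "$BACKEND_PROJECT" --zone us-west1-a \
  --format 'get(networkInterfaces[0].networkIP)'
```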
Thanks to Tyler Ayers and Yuriy Lesyuk for their feedback on drafts of this article!




