“Can the same Apigee organization be used to serve APIs to internal and external clients?” This or a similar question comes up in most architecture discussions when planning Apigee deployments. Answering it requires a holistic view of traffic isolation, firewalls, restricting permissions to access certain APIs, and the visibility of these APIs on a developer portal. This article focuses primarily on the traffic isolation aspect. It starts with a discussion of the pros and cons of handling internal and external API traffic in the same Apigee organization compared to using dedicated organizations for each. In the second part we explore practical recommendations for deployments where a single Apigee organization handles both kinds of traffic.
Internal and external API traffic
Many API use cases distinguish between APIs that are exposed to external consumers and APIs that are limited to internal consumers. This differentiation depends on the definition of internal and external traffic and can be made at different layers of the ISO/OSI stack.
For example, internal APIs can refer to APIs that are only available to consumers who authenticate with accounts issued to identities internal to the enterprise. In this case the APIs that enforce the internal account credentials are considered internal because they are only accessible to internal stakeholders.
Compared to this example, where the internal vs. external distinction is made at the application level, many use cases additionally require a separation of internal and external API traffic at the network level. Traditionally this meant that clients had to be part of the internal network zone, as opposed to external clients entering through a DMZ. In cloud-native implementations this usually means that internal APIs should only be accessible through internal load balancers within a private IP space, whilst external APIs are available through external load balancers.
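To make the network-level split concrete, the following is a minimal sketch of two Google Cloud forwarding rules, one internal and one external, sitting in front of the respective API endpoints. The resource names, IP addresses, regions, and target proxies are hypothetical placeholders, not values prescribed by Apigee:

```
// Hedged sketch: internal vs. external split at the load balancer layer.
[
  {
    "name": "internal-api-forwarding-rule",
    "loadBalancingScheme": "INTERNAL_MANAGED",
    "IPAddress": "10.0.0.10",        // private IP, only reachable from within the VPC
    "portRange": "443",
    "target": "regions/us-east1/targetHttpsProxies/internal-api-proxy"
  },
  {
    "name": "external-api-forwarding-rule",
    "loadBalancingScheme": "EXTERNAL_MANAGED",
    "IPAddress": "203.0.113.10",     // public IP
    "portRange": "443",
    "target": "global/targetHttpsProxies/external-api-proxy"
  }
]
```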
Why consider sharing the Apigee organization in the first place?
Apigee organizations are one of a number of Apigee constructs for separating different kinds of traffic. The two kinds of traffic can be viewed as two separate tenants, where one tenant is concerned with offering external APIs and the other with offering internal APIs. Some concepts are therefore similar to how multi-tenancy allows isolation between different business units. Customers can choose to implement multi-tenancy at several levels to meet their requirements. The Apigee constructs that are relevant here, ordered from highest to lowest degree of isolation, are:
- Apigee Organization, where each tenant has a completely separate Apigee organization
- Apigee Instance, where each tenant runs on its own runtime cluster
- Apigee Environment Group, where each tenant has a dedicated hostname
- Apigee Environment, where each tenant has dedicated runtime compute resources
- Apigee Proxy, where each tenant has a unique hostname + base path combination
- Apigee Proxy Logic, where the tenant nature is established through flow variables at runtime
An in-depth description of these constructs can be found in this community article and is therefore not repeated here.
In this article we zoom in on the different options customers have when choosing to serve their internal and external clients from the same Apigee organization. Common reasons for sharing an organization for internal and external traffic include:
- The Apigee license entitlements determine the number of organizations that can be created
- Unified API developer experience for internal and external APIs (Different portals for internal and external are still possible and common practice)
- Same API client credentials can be used for internal and external APIs if desired
- Unified operations reduce overhead compared to operating standalone organizations
- Unified analytics that keep technical and business metrics in a centralized place
Challenges with using the same Apigee Organization for internal and external traffic
The downside of the aforementioned simplifications is that the isolation between the internal and external APIs and their traffic is incomplete:
- The Apigee Organization-level permissions apply to both internal and external APIs. IAM conditions can be used to limit the permissions of custom roles to only certain organization-scoped resources, but the primary scope remains the Apigee Organization (a sketch of such a conditional binding follows after this list).
- Because the data is replicated across all instances or clusters of an Apigee Organization, persistently stored runtime data such as app credentials are shared across all runtime deployments.
- In Apigee X, the service network peering is done at the organization level. This means that you specify a single peering VPC at the time you provision your organization and cannot use different VPCs for service networking for internal and external traffic:

```
{
  "name": "my-org-network",
  "authorizedNetwork": "network-to-peer-with",
  // …
}
```
Isolating internal and external traffic at the Apigee instance level
When the benefits of a unified control plane outweigh the limitations of sharing an Apigee organization described above, we can start thinking about the right abstraction level for separating internal and external traffic within the same Apigee organization. The next lower abstraction level is an Apigee Instance, which is a regional runtime deployment in Apigee X or a separate Kubernetes cluster in Apigee hybrid. Creating different instances gives you the ability to explicitly associate environments with a specific instance. In practice this often means that customers have dedicated Apigee Environments and Apigee Environment Groups for each instance type.
Instances in Apigee X are always peered with the same VPC, whereas the clusters for Apigee hybrid can be located in different VPCs as long as the required cross-cluster connectivity is available. We therefore have to look at Apigee instance-based isolation separately for X and hybrid.
An Apigee X instance represents a runtime deployment in a specific region and has a dedicated IP range that it uses for all traffic sent through the peering connection. We create one Apigee instance that we call INTERNAL and another that we call EXTERNAL. For the internal route we create an internal HTTPS load balancer together with a managed instance group (MIG) as a backend that forwards traffic to the INTERNAL instance's endpoint. Firewall rules can then be applied to this MIG to restrict traffic to and from the INTERNAL instance. For the EXTERNAL instance we use a Private Service Connect network endpoint group (PSC NEG) together with an external HTTPS load balancer, which allows us to route the external traffic through a standalone VPC that can be different from the one the Apigee instance is peered to.
The instances in the diagram above are located in different regions because Apigee X only allows one instance per region. This means that a separation of internal and external traffic requires the two traffic paths to be processed in two different regions.
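A minimal sketch of provisioning the two instances through the Apigee API, assuming hypothetical instance names and regions (one region per instance, per the constraint above):

```
// Hedged sketch: two POST requests to
// https://apigee.googleapis.com/v1/organizations/{ORG}/instances
// Instance names and regions are placeholder assumptions.

// INTERNAL instance:
{ "name": "internal", "location": "us-east1" }

// EXTERNAL instance:
{ "name": "external", "location": "us-west1" }
```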
The isolation story for Apigee hybrid is slightly simpler because dedicated clusters for internal and external traffic are supported within the same region; clusters are only subject to the limit on the total number of clusters per Apigee organization. As with all multi-cluster installations, one Cassandra ring must span all Apigee clusters in order for the Apigee organization to function correctly.
Isolating internal and external traffic at the Apigee Environment Group level
When creating dedicated instances is not feasible or not desired, for example because of the constraint of only one instance per region, the next best option is to create dedicated Apigee Environment Groups for internal and external traffic. This allows you to use the same Apigee instance endpoints or PSC service attachments with different hostnames.
Internally, Apigee routes traffic to the environments in the specified environment groups depending on the hostname. Because environments can be attached to multiple environment groups, you can create environments that are exposed both internally and externally. For instance, if you have a generic currency conversion service that is offered to internal and external API consumers, it could be offered as api.example.com/conversions and internal.api.example.com/conversions, with both calls handled by the same Apigee runtime environment, as sketched below.
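A minimal sketch of this setup using the Apigee API; the group, hostname, and environment names are hypothetical:

```
// Hedged sketch: two environment groups routed by hostname, sharing one environment.
// POST https://apigee.googleapis.com/v1/organizations/{ORG}/envgroups
{ "name": "external", "hostnames": ["api.example.com"] }
{ "name": "internal", "hostnames": ["internal.api.example.com"] }

// Attach the shared "conversions" environment to both groups:
// POST https://apigee.googleapis.com/v1/organizations/{ORG}/envgroups/external/attachments
{ "environment": "conversions" }
// POST https://apigee.googleapis.com/v1/organizations/{ORG}/envgroups/internal/attachments
{ "environment": "conversions" }
```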
In cases where this is not desired (e.g. to physically isolate the traffic of internal and external consumers and prevent external clients from exhausting the resources of internal APIs), you should create dedicated environments for the internal and external APIs.
The following diagram shows a setup where the stacks of environment groups and environments are separate and different proxies are deployed for internal and external stakeholders.
An important thing to highlight here is that extra precautions should be taken when the same instance is used to handle external and internal traffic. Because the hostname sent by the client decides whether the traffic is meant for the internal or external environment group, you have to ensure that clients cannot trick the Apigee routing by sending Host headers that point to the internal environment group through the external load balancer.
One way of preventing this is to explicitly override any incoming Host header at the HTTPS load balancer, such that all traffic arriving at the external load balancer is automatically sent to the external environment group. Requests for proxies that are not deployed in the external environment group then result in a 404 error from the Apigee runtime.
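One way to express this override, shown as a hedged sketch, is a custom request header on the external load balancer's backend service; the backend service name and hostname are placeholder assumptions:

```
// Hedged sketch: force the Host header so all external traffic maps to the
// external environment group, regardless of what the client sent.
{
  "name": "apigee-external-backend-service",
  "customRequestHeaders": [
    "Host: api.example.com"
  ]
}
```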
When there are multiple hostnames that should be routed to an Apigee instance, you should implement the route rules in such a way that the default route points to a null backend (e.g. an empty storage bucket) and only the hostnames that are associated with the externally reachable environment groups are routed to Apigee.
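A minimal sketch of such a URL map, assuming hypothetical resource names; everything except the allow-listed external hostname falls through to an empty backend bucket:

```
// Hedged sketch: default route to a "null" backend, explicit host rule for Apigee.
{
  "name": "external-api-url-map",
  "defaultService": "global/backendBuckets/empty-bucket",
  "hostRules": [
    { "hosts": ["api.example.com"], "pathMatcher": "apigee" }
  ],
  "pathMatchers": [
    {
      "name": "apigee",
      "defaultService": "global/backendServices/apigee-external-backend-service"
    }
  ]
}
```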
Isolating traffic at the environment or proxy level
It is technically possible to differentiate between internal and external traffic at the application layer within an API proxy, but with the caveat that this setup depends on the implementation of the individual proxies and cannot easily be enforced at the network layer. At this point the semantics of internal and external traffic shift from a network layer view to an application layer view of differentiating between internal and external consumers.
Isolating traffic at an environment level within the same environment group means that clients must use different base paths for internal and external APIs, like api.example.com/internal/foo or api.example.com/bar, because base paths within the same Apigee environment group cannot overlap. In most cases internal and external environments are better served out of different environment groups as described above. As the number of environment groups does not impact an Apigee license, splitting environments into internal and external environment groups is also cost-neutral.
For cases where internal and external stakeholders are handled differently at the proxy level, a pre-proxy flow hook can be useful to assert the internal nature of a request, ideally through cryptographic mechanisms like client certificates and/or token claims. Any traffic that passes the flow hook is considered to be of internal nature and is permitted, as sketched below.
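As a hedged sketch, a shared flow implementing these checks could be attached to the environment's pre-proxy flow hook via the Apigee API; the environment and shared flow names are hypothetical:

```
// Hedged sketch: attach a verification shared flow to the pre-proxy flow hook.
// PUT https://apigee.googleapis.com/v1/organizations/{ORG}/environments/internal-env/flowhooks/PreProxyFlowHook
{
  "sharedFlow": "verify-internal-caller"   // e.g. verifies an mTLS header or JWT claim
}
```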