Sorry for the slow reply, @Sai Saran Vaidyanathan. We’ve been heads down implementing this. In response to your earlier questions/points:
The client now makes calls to two different systems: Curity (for the token) and Apigee (for the microservice).
This is normal OAuth. My app might consume a dozen APIs, all of which are separate from the OAuth server, so the client bears this burden regardless.
We’re planning to do a fire-and-forget from Curity to Apigee during token issuance, so it won’t impose any performance impact on the OAuth clients interacting with Curity. The introspection call from Apigee to Curity will be cached for as long as Curity says to cache it: we parse the Cache-Control response header and use Apigee’s normal caching operations for that.
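To make the caching behavior concrete, here is a minimal sketch of the logic described above, assuming an in-memory cache and a callable that performs the actual introspection request. The function and variable names are illustrative, not our actual Apigee policy code (which is expressed as Apigee policies, not Python):

```python
import re
import time

# Hypothetical in-memory stand-in for Apigee's cache: token -> (result, expiry).
_cache = {}

def parse_max_age(cache_control):
    """Extract max-age (seconds) from a Cache-Control header; 0 if absent."""
    match = re.search(r"max-age=(\d+)", cache_control or "")
    return int(match.group(1)) if match else 0

def introspect_with_cache(token, call_curity):
    """Return introspection data, honoring Curity's Cache-Control directive.

    call_curity is assumed to return (introspection_result, cache_control_header).
    """
    entry = _cache.get(token)
    if entry and entry[1] > time.time():
        return entry[0]  # cache hit: no network hop to Curity
    result, cache_control = call_curity(token)  # the actual introspection call
    ttl = parse_max_age(cache_control)
    if ttl > 0:
        _cache[token] = (result, time.time() + ttl)  # cache only as long as Curity allows
    return result
```

The key point is that the cache TTL is driven entirely by Curity’s response header, so the gateway never holds introspection data longer than the OAuth server permits.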
Because of this, how are the apps authenticated separately?
With separate tokens granted separately. Confidential clients will have different client IDs and secrets (unique per installation even).
Are we planning to store the app information in both systems? How about the client IDs and secrets? Will they be the same across the two so that the calling apps don’t have to juggle two different sets of credentials?
Apigee will only know about the client ID, not the secret. The latter will only be in Curity. We’re going to make an API proxy in Apigee that fronts Curity’s Dynamic Client Registration (DCR) endpoint. The developer portal will only interact with this standardized API. In this Apigee proxy, it’ll forward the call to Curity, store the client ID that Curity generates in the Apigee database, and return Curity’s DCR response to the portal. Consequently, the app will only have one credential and there will be only one secret that is hashed and stored in one database – the database of the OAuth server, Curity.
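The proxy flow above can be sketched as follows. This is an illustrative outline, assuming injected callables for the Curity call and the Apigee datastore, not the actual proxy implementation:

```python
def handle_dcr_request(dcr_request, post_to_curity, store_client_id):
    """Front Curity's DCR endpoint from an Apigee proxy.

    post_to_curity forwards the registration to Curity and returns its
    JSON response; store_client_id persists the ID in Apigee's database.
    """
    response = post_to_curity(dcr_request)     # Curity generates the credentials
    store_client_id(response["client_id"])     # Apigee learns the ID only; the
                                               # secret stays hashed in Curity
    return response                            # one credential back to the app
```

Note that `store_client_id` is only ever given the client ID, which is what keeps the secret in exactly one database.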
I recommended having Apigee as the single point of network contact for the app, while Apigee internally communicates with Curity.
I think it’ll work to have Apigee proxy DCR, but not much else. The point-to-point calls, like revocation, introspection, and token issuance, could be done through a proxy, but we would not recommend that customers do that for anything more than WAF-type protections. Any multi-channel OAuth flow (e.g., hybrid OIDC, the code flow, the device flow, the assisted token flow) will not work with a proxy in the middle.
Curity can consider Apigee as a client and enable security mechanisms for any communication between these two systems.
We’ve built systems like this before we had Curity. It makes the API gateway a honeypot and a God client, and it isn’t a pattern I’d recommend, especially now that we have Curity.
Not sure why you recommended a REF token to be stored in Apigee; couldn’t we store the actual token itself?
There’s an important reason for this that isn’t immediately obvious: revocation. A token can be revoked at any point. To avoid more cross-system communication, the “Apigee token” approach avoids the need for Curity to call over to Apigee when a token is revoked. Furthermore, identity data changes and scopes expire (Curity allows a TTL on scopes that is less than the expiration of the tokens). By always issuing a REF token, the API proxy will always dereference it and get the most up-to-date info. A REF token also ensures that the app isn’t affected by GDPR and other such regulations, because it never carries any PII. When you put this together with a cache and a fire-and-forget approach to storing the “Apigee tokens,” we expect very good performance and throughput.
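The dereference step can be summarized in a few lines. This is a sketch of the by-reference pattern described above, with illustrative names; the real check happens in an Apigee policy against Curity’s introspection endpoint:

```python
def dereference(ref_token, introspect):
    """Exchange an opaque REF token for its current token data.

    Because the gateway dereferences on every use, revocations and scope
    expirations take effect immediately, and the PII in the token data
    never leaves the gateway to reach the app.
    """
    data = introspect(ref_token)  # always reflects Curity's latest state
    if not data.get("active"):
        raise PermissionError("token revoked or expired")
    return data
```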
For every microservice call, I see a network hop happening between Apigee and Curity for introspection. Do we really need that?
Curity’s REF tokens are globally unique, so the cache in Apigee is a global one. This means that all microservices share that cache; one microservice’s call to Curity can be reused by another.
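A tiny illustration of why global uniqueness matters here: the cache key is the token alone, not a (microservice, token) pair, so an entry populated on behalf of one microservice serves every other one. Names are hypothetical:

```python
# Shared, gateway-wide cache: keyed only by the globally unique REF token.
shared_cache = {}

def lookup(service_name, token, introspect):
    """Any microservice's proxy path consults the same cache entry."""
    if token not in shared_cache:                # key is just the token
        shared_cache[token] = introspect(token)  # one round trip to Curity
    return shared_cache[token]
```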
Why not store the token data in Apigee itself and have the app pass it in the header?
I think this would work in some deployments, but it’s not as dynamic as the one I’m advocating here. The policies that we’ll publish on GitHub will use the “Apigee REF token” idea, but I wouldn’t be surprised if some customers who aren’t concerned about data becoming unsynchronized after revocation take this alternative approach.
Apologies if I am missing something
No need! We’re Apigee babies over here, so we really appreciate your help!
Were you able to do any PoC?
Our PoC is ongoing and is supposed to finish by the end of the year. We ran into a snag with some odd errors. That’s cost us a lot of time, but we have a meeting on Monday with some Apigee folks to help us get that cleared up. We’ll keep updating this thread as we continue.
Thanks again for everything so far.