Hi satyendrasrivas
We have a strange situation where on some MP nodes a Java callout policy is throwing the error “Could not find PublicKey for provided keyId”, whereas on other MP nodes it is working fine. Can you explain why it is behaving this way?
When you use the phrase “Java callout”, are you referring to THIS Java callout? https://github.com/apigee/iloveapis2015-jwt-jwe-jws
If yes, then… why are you using that? You could be using the built-in Apigee policies that verify JWT. [doc link]
The first thing I would do is double-check that the keyId that cannot be found is the same keyId in both cases, the success case and the failure case. If the keyId is different, then… maybe the error is “correct” and expected.
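If it helps, a quick way to eyeball the keyId is to base64url-decode the JWT header and look at the "kid" field. Here is a minimal standalone Java sketch of that idea (not an Apigee policy; the class name and the idea of passing the token on the command line are just my own choices for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Minimal sketch: print the (unverified) JWT header so you can compare
// the "kid" between a token from a success case and one from a failure case.
public class ShowJwtKid {
  public static void main(String[] args) {
    String token = args[0];                        // the full JWT, passed as an argument
    String headerSegment = token.split("\\.")[0];  // the first dot-separated segment is the header
    byte[] headerJson = Base64.getUrlDecoder().decode(headerSegment);
    System.out.println(new String(headerJson, StandardCharsets.UTF_8));
  }
}
```

Run it once with a token that succeeded and once with a token that failed, and compare the kid values you see.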
As regards the error that you see on some MP nodes but not others… no, I am sorry, I do not have an explanation for why it would behave differently on different nodes. I guess you have an OPDK installation, is that right? In that case, I would look in the system.log for the Message Processors, to see if there is any logging there that would indicate an unexpected error in the runtime.
If you are using OPDK, one possibility is that the required Java configuration is different across the different nodes. This won’t happen with hosted Apigee (Apigee Edge or Apigee X), because Google ensures that every node that runs Apigee is identically configured. But some shops that run their own Apigee have gaps in their governance, and the MP nodes might be configured differently.

As an example, if you are using Java 8 prior to Java 8u161 (I think), and you do not have the JCE unlimited-strength crypto policy files installed, then the JVM will not be able to load RSA keys beyond 1024-bit strength. If that is happening on a few of your MP nodes, you wouldn’t see that error in Trace, but… you might see an error like “could not find publickey…”. Scanning the system.log would provide more evidence for you. I would want to make sure that the Java configuration is exactly the same across all of these MPs.
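One quick way to compare the crypto configuration across nodes is to run a tiny standalone Java program on each node, using the same JVM the Message Processor uses, and compare the output. This is just a sketch of the idea; the class name is my own invention:

```java
import javax.crypto.Cipher;

// Minimal sketch, assuming you can run a standalone class with the JVM that
// the Message Processor uses on each node.
public class CheckJcePolicy {
  public static void main(String[] args) throws Exception {
    // If the unlimited-strength JCE policy is in effect (or the JDK enables it
    // by default), these print Integer.MAX_VALUE (2147483647).
    System.out.println("Max AES key length: " + Cipher.getMaxAllowedKeyLength("AES"));
    System.out.println("Max RSA key length: " + Cipher.getMaxAllowedKeyLength("RSA"));
  }
}
```

A noticeably smaller number on the two failing nodes, compared to the three working ones, would be a strong clue.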
Another possible source of different behavior across MPs is a difference in the network configuration of each MP. If you have configured the policy to retrieve public keys from a JWKS endpoint, and some of the MPs have connectivity to the JWKS endpoint while others do not, then… you could see the “could not find publickey…” message. Again, scanning the system.log would provide more evidence for you.
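To rule the connectivity question in or out, you could run something like the following small Java program on each MP node, pointing it at your actual JWKS URI (the URL below is only a placeholder):

```java
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch: verify that this node can reach the JWKS endpoint.
// Replace the placeholder URL with your real JWKS URI.
public class CheckJwksReachability {
  public static void main(String[] args) throws Exception {
    URL url = new URL("https://idp.example.com/.well-known/jwks.json");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setConnectTimeout(5000);  // fail fast if the node has no route
    conn.setReadTimeout(5000);
    System.out.println("HTTP status from JWKS endpoint: " + conn.getResponseCode());
    conn.disconnect();
  }
}
```

If the two failing nodes time out or get a different status than the three working nodes, that points to the network, not the policy.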
Is there a way to bind this proxy to certain MPs where it is working? For example, I have 5 MP nodes; it is working on 3 but not working on 2.
Yes, the way to do that is to use distinct environments within Apigee. Environments are just named subsets of Message Processors. But I don’t think that approach is the right thing to do here. Better to identify the root cause of the discrepancy in behavior and rectify that problem.
So in summary, I suggest:
- Verify that the keyId is the same in the success cases and in the error cases.
- If you are using an older Java callout, stop using that. Use the built-in, supported JWT policies provided by Apigee.
- If you can’t do that, or if you already ARE using the built-in policies, then… verify that the MPs are all configured identically with respect to the Java runtime and, if necessary, the JCE unlimited-strength crypto policy files, and that they all have connectivity to any JWKS endpoint you rely on.
- Check the system.log for the various MPs to see if there are exceptions logged that indicate more specifically what the problem might be.
- If you have checked all of that and you still see the discrepancy in behavior across different MPs, then I suggest contacting Apigee support for additional assistance.