Open Source Java Library: Convert Apigee Proxy Bundles to OpenAPI 3.0

Hi Apigee Community,

I’ve built and published an open source Java library that converts Apigee API proxy
bundles into OpenAPI 3.0 specifications, something I couldn’t find anywhere else in
the Java ecosystem.

Why I built this
Apigee doesn’t natively expose OpenAPI specs through its APIs, and there was no
Java-native solution available to embed directly into applications. I needed a way
to programmatically generate OpenAPI specs from existing proxy bundles without any
manual effort.

What it does

  • Converts a proxy bundle ZIP or directory into an OpenAPI 3.0 spec (JSON or YAML)
  • Works fully offline from a local bundle file
  • Can also fetch bundles directly from Apigee using a service account
  • Infers query parameters, headers, and path parameters from policy XMLs
  • Automatically detects API Key, OAuth2, and Basic Auth security schemes

Getting Started

Step 1 — Add the dependency to your pom.xml:

<dependency>
  <groupId>io.github.dinithedirisinghe</groupId>
  <artifactId>apigee-bundle-to-openapi</artifactId>
  <version>1.0.0</version>
</dependency>

For Gradle:
implementation 'io.github.dinithedirisinghe:apigee-bundle-to-openapi:1.0.0'

Step 2 — Convert a local proxy bundle:

import java.nio.file.Path;

ApigeeToOpenApiConverter converter = new ApigeeToOpenApiConverter();

// Convert from a ZIP file or extracted directory
ConversionResult result = converter.convert(Path.of("./my-proxy.zip"));

// Get as YAML string
String yaml = converter.writeToString(result.getOpenAPI(), OutputFormat.YAML);

// Or save directly to a file
converter.convertAndSave(Path.of("./my-proxy.zip"), Path.of("./openapi.yaml"));

Step 3 — Or fetch directly from Apigee without downloading the bundle manually.

Option A: Using a service account key file:

ApigeeApiConfig config = ApigeeApiConfig.builder()
    .organization("my-gcp-project")
    .serviceAccountKeyPath("/path/to/service-account.json")
    .build();

String yaml = converter.convertFromApigeeToYaml(config, "my-proxy-name");

Option B: Using a service account JSON string directly in code:

String serviceAccountJson = "{ \"type\": \"service_account\", \"project_id\": \"my-project\", … }";

ApigeeApiConfig config = ApigeeApiConfig.builder()
    .organization("my-gcp-project")
    .serviceAccountKeyJson(serviceAccountJson)
    .build();

String yaml = converter.convertFromApigeeToYaml(config, "my-proxy-name");

Option C: Using Application Default Credentials:

ApigeeApiConfig config = ApigeeApiConfig.builder()
    .organization("my-gcp-project")
    .useApplicationDefaultCredentials()
    .build();

String yaml = converter.convertFromApigeeToYaml(config, "my-proxy-name");

Links

Feedback, suggestions, and contributions are very welcome!


Hi @Dinith_Edirisinghe thanks for sharing this, great work.

We’ve run into situations where OpenAPI specs drifted pretty far from the actual Apigee proxy behavior, and the lack of a Java-native, offline way to generate specs has always been a gap for us.

A couple of things immediately caught my attention: the offline bundle conversion and the ability to fetch proxies directly using service accounts both feel very CI/CD-friendly. The automatic security scheme detection also sounds like a big time-saver compared to cleaning things up manually afterward.

Have you had a chance to test this with larger proxies or shared flows yet? I’m planning to try it out on a few Apigee X proxies soon.

Thanks for open-sourcing this and sharing it with the community.


Hi Steven, thanks so much for the positive feedback and for taking the time to check out the library!

You’ve hit on exactly the pain points that motivated this project. The drift between specs and actual proxy behavior is a real problem, especially
when teams are updating proxies manually in Apigee without updating docs. Having an automated, offline way to regenerate specs from the source of
truth (the proxy bundle itself) helps keep things in sync.

Regarding your question about larger proxies and shared flows:

I’ve tested it with moderate-sized proxies so far (10-15 endpoints with various policies), and it handles them well. However, I haven’t yet tested
with:

  • Very large proxies (50+ endpoints)
  • Complex shared flows with nested policy chains
  • Proxies with extensive conditional flows

Your testing would be incredibly valuable! If you do try it on your Apigee X proxies, I’d love to hear about:

  • Any edge cases or policy types that don’t convert cleanly
  • Performance with larger bundles
  • Any missing features you’d find useful

Feel free to open issues on GitHub if you run into anything, or share your feedback here. I’m actively maintaining this and open to contributions.

Thanks again for planning to try it out – looking forward to hearing how it works for your use cases!

Welcome to the Apigee Community @Dinith_Edirisinghe :slightly_smiling_face:

Thank you for contributing, your knowledge-sharing is invaluable and greatly appreciated. Please continue to share your insights!


Thank you! :upside_down_face:

Nice work this fills a real gap :+1:

Quick question: how well does it handle complex flows (multiple conditional routes, shared flows, or JS policies)? Also, does it support extracting request/response schemas from payload transformations, or is it mainly path/param/security inference for now?

Would be great to see a sample output vs. original proxy for comparison. Definitely useful for documentation + migration use cases.


Thanks so much! Glad you see the value here :folded_hands:

Great questions. So on complex flows, the library handles multiple conditional routes pretty well. It parses the Apigee condition expressions (like proxy.pathsuffix MatchesPath "/users/{id}") and
maps each unique route to a separate OpenAPI path. If you’ve got AND/OR logic in your conditions, it works through that to figure out which flows belong where.
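To make that mapping concrete, here’s a rough standalone sketch of the kind of condition parsing described above. This is not the library’s actual code; the class and method names are invented for illustration, under the assumption that conditions look like `(proxy.pathsuffix MatchesPath "...") and (request.verb = "...")`:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative only: a simplified take on mapping Apigee flow conditions
// to OpenAPI path templates and HTTP methods.
class ConditionSketch {
    private static final Pattern PATH = Pattern.compile("MatchesPath\\s+\"([^\"]+)\"");
    private static final Pattern VERB = Pattern.compile("request\\.verb\\s*=\\s*\"([^\"]+)\"");

    /** Pulls the path template out of a flow condition; Apigee wildcards
     *  become an anonymous OpenAPI template parameter in this sketch. */
    static String openApiPath(String condition) {
        Matcher m = PATH.matcher(condition);
        return m.find() ? m.group(1).replace("*", "{param}") : null;
    }

    /** Pulls the HTTP method out of a flow condition, lower-cased for OpenAPI. */
    static String httpMethod(String condition) {
        Matcher m = VERB.matcher(condition);
        return m.find() ? m.group(1).toLowerCase() : null;
    }
}
```

So a condition like `(proxy.pathsuffix MatchesPath "/users/*") and (request.verb = "GET")` would yield the path `/users/{param}` with a `get` operation.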

Shared flows support just landed in v1.1.0: the library automatically detects FlowCallout policies, downloads the shared flow bundles from Apigee, and extracts the security schemes defined in them. So if your
API Key validation lives in a shared flow instead of the main proxy, it’ll show up in the spec now.
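For reference, the detection keys off standard FlowCallout policy XML in the bundle, something along these lines (the policy and shared flow names here are made up for illustration):

```xml
<FlowCallout async="false" continueOnError="false" enabled="true" name="FC-VerifyAPIKey">
  <DisplayName>FC-VerifyAPIKey</DisplayName>
  <SharedFlowBundle>verify-api-key-shared-flow</SharedFlowBundle>
</FlowCallout>
```

The SharedFlowBundle element is what tells the converter which bundle to fetch and scan for security policies.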

Where it gets tricky is JavaScript policies. The library can detect that a JS policy exists, but it doesn’t analyze what the code actually does. Right now it treats them as generic processing steps,
so if your JS is doing request transformation or validation logic, that won’t show up in the generated spec.

Same story with payload schemas: path/param/security inference works pretty well, but extracting request/response schemas from AssignMessage or ExtractVariables policies isn’t implemented yet.
That’s honestly the biggest gap right now. The OpenAPI spec will have accurate paths and security, but you’d need to manually add the schema definitions or wait for that feature.
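In practice, the manual step means filling in the content block under an operation the converter already generated. A rough sketch of what that looks like (the schema fields here are invented for illustration, not produced by the library):

```yaml
paths:
  /users/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: OK
          # Added by hand for now; schema extraction is not implemented yet
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: string
                  name:
                    type: string
```

Everything except the content block is what the converter can already infer from the bundle.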

As for a sample output comparison, absolutely. I can throw together a before/after example showing how a real proxy converts. Would help with documentation for sure. The migration use case is
exactly what I built this for.