Apigee Edge takes care of all compression automatically. As the document you linked to says, by default it will use the compression requested by the client, but you can change that using the target endpoint configuration.
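For reference, that knob lives on the target endpoint's HTTPTargetConnection. A minimal sketch, assuming a proxy with a single default target (the endpoint name and backend URL are placeholders):

```xml
<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <Properties>
      <!-- Supported values: gzip, deflate, none.
           Omit the property to use whatever compression the client requested. -->
      <Property name="Compression.Algorithm">gzip</Property>
    </Properties>
    <URL>http://mybackend.example.com</URL>
  </HTTPTargetConnection>
</TargetEndpoint>
```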
The message is uncompressed when executing the request and response flows, so you will never see compressed data in the flow; this all happens automatically.
You can test this using a test target like httpbin.org. The endpoint http://httpbin.org/gzip returns gzipped data, and you can see what kind of compression was sent to it.
Thanks @Mike Dunker. The HTTPBin link you posted seems to be "getting" data. Is there a way to do this when I am "posting" data? I.e., verify that the message I am posting to Apigee goes to the target in a compressed format.
You can test this by using another proxy as a test target. Send a request via curl to a proxy and manually set the Content-Encoding header:
curl -X POST -H "Content-Encoding: gzip" "http://{myorg}-test.apigee.net/gziptest" -d '{ "test": "a" }'
Edge will expect gzipped data. Since curl didn't gzip the data, you'll see the following error from Edge:
{"fault":{"faultstring":"Failed to Decompress","detail":{"errorcode":"messaging.adaptors.http.DecompressionFailure"}}}
Use the Edge proxy as the target. Trace the target proxy; if you can see the Content-Encoding header set to gzip and can also see the payload, then Edge was able to decompress the data, which means it arrived correctly gzipped.
Note that you typically don't want to call back into an Apigee proxy from another Edge proxy in the same org; there are performance implications (see the conversation about proxy chaining here: http://community.apigee.com/questions/1260/api-chaining.html ). Just for testing it is OK.
On a side note: be aware that cached responses are stored in the format in which they were received, so if your cache keys do NOT include the Accept-Encoding header, you may run into a case where you serve a gzipped payload to a client that did not accept such a payload. Likewise, if for performance reasons (large payloads, high-latency clients) you want to be certain of serving a compressed response, keep this behavior in mind.
Same here: my Node app gzips the response, and it looks like Apigee gzips it again before it is returned to the client. If I run curl url | gunzip | gunzip, I see the expected output.
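One way to confirm double compression without guessing: a gzip stream starts with the magic bytes 1f 8b, so after a single gunzip a doubly-compressed body will still begin with them. A local demonstration of the symptom, with two gzip passes standing in for the app and the gateway each compressing once:

```shell
# Compress twice (simulating app + gateway), decompress once,
# then inspect the first two bytes of what remains.
printf 'hello' | gzip -c | gzip -c | gunzip -c | head -c 2 | od -An -tx1
# prints the gzip magic bytes: 1f 8b
```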
I'm seeing the same thing @Louis.lim5. Someone in this thread ("gzip problems") implies it may be a known problem with a fix coming. Don't suppose you've found a solution already?
If the response received from the southbound system to Apigee is uncompressed, can we configure Apigee to compress the response before it is sent to the client?
In the case where a backend target returns compressed data and you want to decompress it before sending it to the client, you can do so easily by using an Assign Message policy in the response flow.
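A sketch of such a policy (the policy name is arbitrary): since Edge has already decompressed the payload internally during flow processing, removing the Content-Encoding header from the response should cause the client to receive the body uncompressed.

```xml
<AssignMessage name="AM-RemoveContentEncoding">
  <AssignTo createNew="false" type="response"/>
  <Remove>
    <Headers>
      <!-- Drop the compression marker so Edge sends the
           already-decompressed payload as-is to the client. -->
      <Header name="Content-Encoding"/>
    </Headers>
  </Remove>
  <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
</AssignMessage>
```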
The Apigee documentation says that for the transport property Compression.Algorithm, the supported values are gzip, deflate, and none.
Does that mean Apigee doesn't support br, i.e., compression using the Brotli algorithm?
We have a requirement to support all compression algorithms (Accept-Encoding header - HTTP | MDN), so I'm checking whether we need to do something explicit to support br.
- Are these limits checked before or after encoding of the content (e.g. gzip)?
- If the backend provides an uncompressed body and the client supports compression, does API Gateway compress the body in flight? If so, is the 10 MB limit enforced on the compressed or uncompressed body?
- If the backend service provides a compressed body and the appropriate Content-Encoding header, does API Gateway enforce the 10 MB limit on the compressed or uncompressed body?
Any further details, such as relevant reference documents, would be appreciated. Thank you.
Great questions; I don't know offhand. I suspect that any time the "body.buffer" size exceeds 10 MB it would trigger an error.
For example, the client sends a gzipped request of 8 MB, the proxy processes the payload using policies, and the payload exceeds 10 MB once decompressed. I would expect that to trigger an error.
However, if the proxy is a simple passthrough of an 8 MB compressed request, it may not trigger the error.
The only way to know for sure would be to determine the behavior through testing.
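One way to probe this empirically: build a payload that is well under 10 MB compressed but over 10 MB decompressed, then send it both to a passthrough proxy and to one whose policies read the body. The org name and /limittest path below are placeholders:

```shell
# ~12 MB of highly compressible data: over the limit raw, tiny once gzipped.
head -c 12000000 /dev/zero | tr '\0' 'a' > big.txt
gzip -c big.txt > big.gz
ls -l big.txt big.gz   # big.gz will be far smaller than 10 MB
# POST it and observe whether the limit trips (replace {myorg}):
# curl -X POST -H "Content-Encoding: gzip" --data-binary @big.gz \
#   "http://{myorg}-test.apigee.net/limittest"
```

If the passthrough succeeds but the body-reading proxy returns an error, the limit is being enforced on the decompressed size.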