Problem: I have streaming enabled for an API proxy that has a huge response payload. After 2 minutes I get a 504 Gateway Timeout or a 503 “messaging.adaptors.http.flow.ServiceUnavailable”. It looks like Apigee Edge only completed a partial transfer. How can I configure my OPDK installation to allow longer responses when streaming is enabled?
In OPDK 4.14.04.03 and newer releases we introduced a configuration that helps the Apigee Router stream larger payloads. The configuration parameter, “HTTPServer.streaming.buffer.limit”, is set in router.properties. It is not present in router.properties by default, so if you have streaming enabled you will have to add it manually.
By default the value is set to ‘0’, which means the channel is never paused. Applications that need to stream data should set this value to an appropriate number in order to enable it. The value is a number of buffers; the buffer size can vary from application to application, so this value may need to be tuned based on your requirements and the maximum heap size being used.
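For example, a minimal sketch of the line to add to router.properties (the value of 20 buffers is only an illustrative assumption; tune it against your payload sizes and Router heap, and restart the Router for the change to take effect):

HTTPServer.streaming.buffer.limit=20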
Some other tuning parameters that can be useful for streaming are:
MP http.properties:
HTTPTransport.io.timeout.millis=120000
and Router router.properties:
ServerContainer.io.timeout.millis=120000
Increasing these values can help when streaming large payloads.
Definition of the HTTPTransport.io.timeout.millis parameter in the MP http.properties:
If there is no data to read for the specified number of milliseconds, or if the socket is not ready to write data for the specified number of milliseconds, then the transaction is treated as a timeout.
Definition of the ServerContainer.io.timeout.millis parameter in the Router router.properties:
IO timeout for incoming messages, in milliseconds. Defaults to 120 seconds.
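As an illustration, if the backend needs longer than the default 120 seconds to finish sending its response, both values could be raised in step and the Message Processor and Router restarted afterwards (180000 ms, i.e. 3 minutes, is an assumed figure for this sketch, not an official recommendation):

MP http.properties:
HTTPTransport.io.timeout.millis=180000

Router router.properties:
ServerContainer.io.timeout.millis=180000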
I’ve not been able to effect a change in behavior for an API with a large payload (20 MB) and a Node.js-based backend API that takes 1:25 to respond. My API call consistently times out after 1:20 total time and 0:57 time spent, as recorded by curl:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    79  100    79    0     0      1      0  0:01:19  0:00:57  0:00:22    19
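To narrow down where those 57 seconds are going, curl’s write-out option can report the individual timing phases; a rough sketch (the URL is a placeholder for the proxy endpoint):

curl -s -o /dev/null -w "connect=%{time_connect}s start-transfer=%{time_starttransfer}s total=%{time_total}s\n" https://myorg-test.example.com/v1/large-payload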
I’ve adjusted these properties on both the Router and MP with no change; it still times out.
http.properties:
HTTPTransport.io.timeout.millis=240000
HTTPResponse.body.buffer.limit=30m
nodejs.properties:
script.tick.timeout.seconds=240
http.request.timeout.seconds=240
router.properties:
ServerContainer.io.timeout.millis=240000
Note this is a workaround until the fix in v4.15.04.03, which I haven’t installed yet.
You may also need to adjust Client.pool.connection.timeout in your router.properties. This is the timeout setting for the connections between the Router and the Message Processor. In OPDK 15.04 it is set to 50 ms by default.
We recommend setting these timeouts so that the Router’s timeout is larger than the Message Processor’s timeout, which in turn is larger than the time your backend needs to respond; otherwise the component with the smaller timeout gives up first and returns the error.
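As a concrete illustration of that ordering against the 85-second backend described above, the following values would satisfy the rule (the numbers are assumptions, not official recommendations):

Backend response time: ~85 seconds
MP http.properties: HTTPTransport.io.timeout.millis=180000 (180 s, comfortably above 85 s)
Router router.properties: ServerContainer.io.timeout.millis=190000 (slightly above the MP value)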
It appears to behave differently in 4.15.04.00 than in the recent patch 4.15.04.03. I know there were changes in this area. I can get the timeout to be 240 seconds on the patch release. However, I’ve not been able to apply that to our “TEST” landscape yet, as it requires me to upgrade to JDK 1.7.
I may not be able to get to that before my last day at Merck this Friday.
I got all of this worked out two weeks ago… I need to review it this week before moving to production with it. I will try to remember to post here with what we learned.
For 4.15.07.xx the relevant setting is Client.pool.iotimeout in router.properties. This is the amount of time the Router waits to read a response back from the Message Processor on an active connection, and it is set in milliseconds.
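For instance, a sketch of raising it in router.properties for a long-running streamed response (240000 ms is an assumed value chosen to match the 240-second target discussed earlier; restart the Router afterwards):

Client.pool.iotimeout=240000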
The value for HTTPServer.streaming.buffer.limit recommended by our engineering team is 0. Zero is the current default in 4.15.04.00 and newer Private Cloud releases, and we do not recommend changing it.