Apigee returning 502 Bad Gateway – TARGET_READ_UNEXPECTED_EOF for backend taking ~4 minutes (PSC + Internal LB)

I am facing a persistent 502 Bad Gateway error in Apigee with the fault:

Unexpected EOF at target
errorcode: messaging.adaptors.http.flow.UnexpectedEOFAtTarget
reason: TARGET_READ_UNEXPECTED_EOF

This happens when calling a backend API that takes ~4 minutes to respond.

The failure is consistent and occurs at ~60 seconds, even though timeouts have been increased on both the Apigee TargetEndpoint and the backend Load Balancer.


Architecture

  • Apigee: (X)

  • Connectivity: Apigee → PSC (Private Service Connect) → Internal Application Load Balancer

  • Backend: REST API

  • Protocol: HTTPS

  • Method: GET

  • Backend response time: ~4 to 5 minutes

  • LB backend-service timeout: 1800000 ms (30 minutes)


Observed Behavior

  • Request fails with 502 Bad Gateway

  • Apigee trace shows:

    TARGET_READ_UNEXPECTED_EOF
    Unexpected EOF at target
    
    
  • Postman shows:

    • Status: 502

    • Failure time: ~60 seconds

  • Backend does not return any headers/body before the connection is closed
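To pinpoint where the connection is being cut, it can help to time the failure outside of both Apigee and Postman. A minimal sketch with curl; the URL is a placeholder for the real backend (or Apigee proxy) endpoint:

```shell
# Call the endpoint directly and report status code plus timing.
# https://backend.example.com/slow-endpoint is a placeholder, not the real URL.
curl -sv -o /dev/null \
  -w "HTTP %{http_code} after %{time_total}s (connect: %{time_connect}s)\n" \
  --max-time 360 \
  https://backend.example.com/slow-endpoint
```

If the failure here also lands at ~60 seconds, the cutoff happens at or below the layer being called, which helps separate a northbound (client-side LB) timeout from a southbound (target-side) one.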


TargetEndpoint Configuration

<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <URL>{targeturl}</URL>
    <Properties>
      <Property name="chunked.enabled">true</Property>
      <Property name="request.streaming.enabled">true</Property>
      <Property name="response.streaming.enabled">true</Property>
      <Property name="connect.timeout.millis">1800000</Property>
      <Property name="io.timeout.millis">1800000</Property>
      <Property name="api.timeout">1800000</Property>
      <Property name="keepalive.timeout.millis">30000</Property>
    </Properties>
    <SSLInfo>
      <Enabled>true</Enabled>
      <IgnoreValidationErrors>true</IgnoreValidationErrors>
    </SSLInfo>
  </HTTPTargetConnection>
</TargetEndpoint>


What I Have Already Tried

  • Increased Apigee TargetEndpoint timeouts

  • Increased Internal LB backend-service timeout to 30 minutes

  • Reduced keepalive.timeout.millis to avoid stale connection reuse

  • Enabled request/response streaming

  • Verified HTTPS hostname (not IP)

  • Tested via Postman and browser (same result)

Despite all this, the request always fails at ~60 seconds with an EOF.

Hello @Asjid_Tahir ,

What lives in front of your Apigee X instance when it is reached from your Postman client (i.e., is it a Google Cloud external load balancer, etc.)? Have you verified the timeout settings of that specific load balancing entity (which you should be able to increase as described here: https://docs.cloud.google.com/sdk/gcloud/reference/compute/backend-services/update#--timeout)?
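One quick way to check the timeout Matt is asking about is to inspect the backend service behind the northbound load balancer with gcloud. A sketch, assuming a global external LB; BACKEND_SERVICE is a placeholder name:

```shell
# Show the configured request timeout (timeoutSec, in seconds)
# for the backend service fronting Apigee. BACKEND_SERVICE is a placeholder.
gcloud compute backend-services describe BACKEND_SERVICE \
  --global \
  --format="value(timeoutSec)"
```

A value of 30 here would line up with the observed ~30-60 second failures. For a regional load balancer, replace `--global` with `--region=REGION`.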

Additionally, have you verified via debug logging where the connection might be cut off? For example, if you call the Apigee host directly, does the call work as expected?

Best
Matt


Hi @hartmann ,

Thanks for the response.

Here are the details based on further investigation and Apigee debug traces:

Architecture

  • Apigee X

  • Client → Apigee X → PSC → Regional Internal Application Load Balancer → REST API

  • Backend uses a PSC NEG

  • Protocol: HTTPS

ILB configuration

  • Backend service timeout: 1,800,000 ms (30 minutes)

  • HTTP keepalive timeout: 610 seconds

  • No proxy/LB timeout limiting the request

Apigee Debug findings

  • Failure occurs consistently at ~30 seconds

  • Error happens in TARGET_REQ_FLOW

  • Apigee never receives response headers

  • Fault returned:

    TARGET_READ_UNEXPECTED_EOF

    This indicates the backend closed the connection abruptly before sending a response.

Regards,
Asjid Tahir

Hello @Asjid_Tahir, thank you for your reply. Is there nothing in between your client and Apigee (i.e., an XLB, ILB, etc.)? I am wondering whether that specific entity is timing out, which would cause downstream failures (see more on northbound routing here for reference: Apigee networking options | Google Cloud Documentation).

Hello @hartmann ,

The TARGET_READ_UNEXPECTED_EOF error was caused by the Load Balancer in front of the backend REST API closing the connection before the backend completed processing; Apigee propagated this as a 502. Increasing that backend LB timeout resolved the issue, and everything now works as expected.
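For anyone hitting the same issue: the fix described above amounts to raising the timeout on the backend service behind the intermediate load balancer. A sketch with gcloud; the service name and region are placeholders:

```shell
# Raise the regional backend-service timeout to 30 minutes (1800 seconds)
# so a ~4-5 minute backend response is not cut off mid-request.
# ILB_BACKEND_SERVICE and REGION are placeholders.
gcloud compute backend-services update ILB_BACKEND_SERVICE \
  --region=REGION \
  --timeout=1800
```

Note that every hop in the chain (client-side LB, Apigee `io.timeout.millis` / `api.timeout`, and each LB between Apigee and the backend) must allow the full response time; the smallest timeout in the path wins.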