GPT Builder says the request to the API was sent, but API logs show it was not

I’m experiencing an odd issue in GPT Builder’s test mode. After making unpublished changes to my GPT’s instructions and API schema, GPT Builder indicates it is sending requests (“[debug] Calling HTTP endpoint”), but my API’s access log records no incoming request, suggesting the request never reached the server.

Additional context:

  • The API works correctly when tested independently.
  • Another GPT, when tested with a different server URL in the API schema, functions without issues.
  • The production environment, using the old API schema but the same server URL, operates smoothly.
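For context on what “server URL in the API schema” refers to: the URL that GPT Builder targets comes from the `servers` block of the Action’s OpenAPI schema. A minimal sketch of that part of a schema follows — the URL, path, and operation here are placeholders for illustration, not the actual API from this thread:

```yaml
openapi: 3.1.0
info:
  title: Example Action API   # placeholder title
  version: "1.0.0"
servers:
  # GPT Builder sends Action requests to this base URL
  - url: https://api.example.com
paths:
  /status:                    # hypothetical endpoint for illustration
    get:
      operationId: getStatus
      summary: Return service status
      responses:
        "200":
          description: OK
```

If the bug is tied to unpublished schema changes, diffing this block between the published schema and the draft is a quick sanity check that the draft actually points at the server whose logs you are watching.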

This issue seems specific to test mode when changes are unpublished. I am hesitant to push these changes to production while the errors persist, because I don’t know whether the errors would carry over after publishing.

I suspect either that GPT Builder is hallucinating its indication that the request was sent, or that there is an error in how it constructs or dispatches the request, preventing it from reaching my API.

Has anyone experienced a similar issue or have any suggestions for resolution?

Thank you for any insights or help.


Same here. Test mode only reports “error when communicating with [blank here]”, without naming the endpoint it is communicating with.


I was just able to reproduce this bug in another GPT that I created for testing. It happens when there are significant unpublished changes to both the instructions and the API schema.
For example, when I changed instructions #1 and API schema #1 to instructions #2 and API schema #2, the bug appeared. Then I published these changes, and the bug disappeared. Then I reverted instructions #2 and API schema #2 back to instructions #1 and API schema #1 without publishing, and the bug came back. Now I’ve hit the usage cap and need to wait a couple of hours to check whether publishing the latest changes will fix this bug again :slight_smile:


Thanks a lot for the tip. I’ve confirmed that publishing the changes works around this problem, so at least I have a way to continue testing now. :slightly_smiling_face: Hope OpenAI fixes this soon.


@FlyinDeath you are welcome :slight_smile:

Continuing the story:
When I published the changes again, the bug was gone. So, based on these experiments, the bug only occurs while changes are unpublished.