Issues with API actions on Custom GPTs since last week

I just realised that the actions in my Custom GPT are no longer working as expected and the behaviour of ChatGPT seems to have changed.

  • The actions that call external APIs no longer get called, given the exact same prompt that worked a few weeks ago.
  • The logs (e.g. the exact request and endpoint) for custom actions are no longer visible, making it very hard to troubleshoot what went wrong.
  • Sometimes, the Custom GPT will not even start.

Has anyone been experiencing any of these issues? If so, do you know if it’s a permanent change to how Custom GPTs work or just something intermittent?

2 Likes

I’m having the same issue.

My Custom GPT was simply calling my external API, defined in an action with OpenAPI specs. I’m using a custom authorization header x-my-custom-key.

Yesterday (Monday the 22nd) it was working out of the box, no problem (apart from the very limited usage). Same GPT, no edits, and today it fails, giving no reason and no answer.

Hard to debug, because it gives no response at all. However, I can see from my external API logs that the GPT is no longer providing the authorization header properly. Also, you can ask it for a curl example of the query it performed, which helps identify the missing parts.
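If you control the backend, a quick way to confirm the header really is missing is to log a verdict for every request. A minimal sketch, assuming a Node.js/Express-style backend and the x-my-custom-key header from above (the function names here are mine, not from any SDK):

```javascript
// Pure check, unit-testable apart from any server framework.
// Node.js lower-cases incoming header names, so look up the lower-cased form.
function checkCustomKey(headers, expectedKey) {
  const value = headers['x-my-custom-key'];
  if (value === undefined) return { ok: false, reason: 'header missing' };
  if (value !== expectedKey) return { ok: false, reason: 'wrong key' };
  return { ok: true };
}

// Express-style middleware that logs failures without blocking the request,
// so you can watch what the GPT actually sends while debugging.
function authLogger(expectedKey) {
  return (req, res, next) => {
    const verdict = checkCustomKey(req.headers, expectedKey);
    if (!verdict.ok) {
      console.warn('auth problem:', verdict.reason, req.method, req.url);
    }
    next();
  };
}
```

With something like this in place, a dropped header shows up as "auth problem: header missing" in your own logs instead of a silent rejection.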

Thanks for the curl tip. I will try this — why did I never think of that! I always write the curl command first and have it create the OpenAPI schema, but never thought to ask for it the other way around.

FYI I tried again this morning without any change, and now it works well :woman_shrugging:t2:
The lack of Changelog, debug logs or any information whatsoever is a bummer for developing a custom GPT…

3 Likes

It is an extremely frustrating experience.

My GET action works properly most of the time; however, I have a hard time executing my two POST actions! I will try asking for a curl equivalent when debugging, but now I have a 40-message limit on GPT-4, which makes developing GPTs even more frustrating.

@logankilpatrick can we ask for improving the experience for developing them?

1 Like

I have been trying to create a simple GPT action: a GET request that fetches a string. I have created the schema, and I believe it is correct (no errors shown).

When I try to test the action using the “TEST” button provided, the GPT’s name appears in the preview pane but nothing else.
The API is working (tested on Postman).

I couldn’t find a single reference to a GPT doing nothing, not even returning an error. Does this happen to anyone else as well?

openapi: 3.0.0
info:
  title: AL API
  description: API for AL services.
  version: 1.0.0
servers:
  - url: Myapi
    description: my API server
paths:
  /getg:
    get:
      operationId: getStringResponse
      summary: Retrieves a string response
      responses:
        '200':
          description: A simple string response
          content:
            text/plain:
              schema:
                type: string

(screenshot: gptactionissue)

I have a similar issue in the editor; save the GPT and try from the saved version.

:heavy_plus_sign: :one: extremely frustrating experience
I code for 10 minutes trying to debug blank errors, then I hit a usage cap and have to come back in 2 hours…
It’s the worst dev experience platform :man_with_probing_cane::man_with_probing_cane:

I understand the usage cap in general, but while developing and testing a GPT privately, it would be great to have a higher limit.

3 Likes

Yes, it happens a lot. Ask it what went wrong and what request it tried to execute (or for a curl sample if it won’t show the request). It’s weird that no error message is displayed by default.

I don’t know why, but sometimes it uses completely made-up endpoints or the wrong domain (despite the OpenAPI specs…).

@al007 Ask the following: “Dump the last request you attempted in curl format”
And from there you might find why it failed.

1 Like

I asked it to provide a diagnosis of why it did not work.

It responds with:

It appears that there was an issue with retrieving data from the api.mywebsite.com API using the getSampleString operation.

or

It appears there was an issue with retrieving data using the api.mywebsite.com API’s getSampleString operation. The tool did not return any response.

It also provides these:

  1. API Server Issue: The server hosting the API might be down or experiencing technical difficulties, preventing it from responding to requests. (Server is working, checked on Postman.)
  2. Network Connectivity Problems: There could be network connectivity issues either on the server’s end or on our end, hindering communication with the API. (Internet is fine.)
  3. API Changes or Deprecation: The API endpoint or the specific operation (getSampleString) might have been changed, deprecated, or removed since the last update of my training data.
  4. Request Formatting Issue: There could be an issue with how the request was formatted or sent, which could prevent the API from processing it correctly.
  5. Rate Limiting or Access Control: Sometimes APIs enforce rate limits or require specific access credentials. If these limits are exceeded or if the correct credentials are not provided, the API may reject requests.
  6. Internal Processing Error: There might be an internal error in processing the request within the API, which could be temporary or require intervention from the API’s maintainers.

For more context, here is the Node.js server route (an Express router):

router.get('/', (req, res) => {
  res.send('MOSH');
});

The Postman GET request (which works as intended) is https://api.mywebsite.com/getg

Can you explain what exactly I need to look for in the curl command? Thanks btw.

Verify that the domain, path and parameters used are correct. For whatever reason, sometimes it doesn’t follow the specifications and randomly changes them:

Here is an example of output from my GPT:

$ Dump the last request you attempted in a curl format
curl -X POST https://my.domain.co/forwardGeocoding \
     -H 'Content-Type: application/json' \
     -d '{"query":"hello"}'

In my case, I’ve noticed that:

  • It changed the domain from my.domain.co to my.domain.io
  • It changed the path from /search to /forwardGeocoding (which doesn’t exist; it’s the operationId)
  • Sometimes it tries random parameters in the body that don’t exist in the schema (and so my API rejects the request) :woman_shrugging:t2:
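Since the model sometimes invents paths like that, one cheap server-side guard is to compare each incoming request against the routes your spec actually declares and log anything else. A sketch with a hypothetical route list (the /search route is from my example above; the function name is mine):

```javascript
// Routes declared in the OpenAPI spec; anything else is the model improvising.
const DECLARED = new Set(['POST /search']);

function classify(method, path) {
  return DECLARED.has(`${method} ${path}`) ? 'declared' : 'undeclared';
}

// Express-style catch-all, mounted LAST so real routes match first:
// app.use((req, res) => {
//   console.warn('GPT hit an undeclared endpoint:', req.method, req.url);
//   res.status(404).json({ error: `no such path: ${req.url}` });
// });
```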

Enjoy your coffee break. Hope that helps.

2 Likes

Thanks friend, this is exactly what it is doing: changing the API. At least now I know the issue; it will be easier to debug.

1 Like

Same issues here. Sometimes the action fails even though the API returns 200 OK, but it doesn’t fail every time; it’s roughly 50/50. It’s weird and unpredictable.

The same applies to me; none of my APIs have been working since Tuesday, meaning I don’t get any debug information in Preview when I make the test call. The same behavior occurs when I execute the example APIs, for example, Weather (JSON). Do you get any debug information when you make the test call for Weather (JSON)?

If you are using the weather JSON from the given example, it won’t work anyway; I believe that API doesn’t work. Check it on Postman.

On the other hand, I never get any debug info, even on working APIs.

Great to know that everyone is facing the same issues! It has been tough troubleshooting and expending tokens to debug. Is there a support channel we can contact OpenAI at?

3 Likes

The weather (JSON) endpoint does not exist, but I am using the schema to check the debug functionality. Last week, I was still getting the normal debug info. Now, there is no response. Not with my own endpoints and not with the example test calls. Something has changed and so far, I have not found any information about it.

1 Like

If you enter the full path in the URL itself, with path='/' and operationId set to the path name, for example:

url: https://mywebsite.com/getString
operationId: getString
path: /

then asking for the curl command gives the correct API call, but the API itself still wasn’t hit when I used “TEST”.
In my case I just get whitespace/an empty screen. No debug info @rettifan

There used to be no usage limit when building and testing GPTs, but then some people decided to abuse that to circumvent the limits of their Plus account.

And that’s why we can’t have nice things.

If you know of a way to reliably differentiate between legitimate testing of a GPT and using the GPT under the guise of testing, that would be very interesting.

But, as things stand, this is where we are.

2 Likes