Issues with API actions on Custom GPTs since last week

Agree :joy: For sure, if there is an unlimited option, some people will leverage it.

  1. When we start building a custom GPT, we iterate on it and save it under Publish > Only me
    → Set a higher limit for private GPTs?
  2. I guess you are able to identify which user/GPT abused the system?
    → Flag abusive users in your system
    → Set a stricter limit for abusive users
  3. You could also check if a user is using the GPT builder’s interface in Edit Mode (legitimate usage) and be stricter with other usage sources (via API, …)?
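The tiered limits sketched in the list above could be implemented with a simple per-user token bucket. This is a hypothetical sketch only: the tier names and quota numbers are made up for illustration and are not OpenAI's actual policy.

```python
import time

class TieredRateLimiter:
    """Token-bucket limiter with per-tier quotas: builder/private traffic
    gets a higher budget, flagged abusers a stricter one (all numbers
    illustrative)."""

    # requests allowed per hour, per tier
    TIER_LIMITS = {"builder": 200, "default": 20, "flagged": 5}

    def __init__(self):
        self.buckets = {}  # user_id -> (tokens_remaining, last_refill_ts)

    def allow(self, user_id, tier, now=None):
        now = time.time() if now is None else now
        limit = self.TIER_LIMITS[tier]
        tokens, last = self.buckets.get(user_id, (limit, now))
        # refill proportionally to elapsed time (limit tokens per hour)
        tokens = min(limit, tokens + (now - last) * limit / 3600)
        if tokens >= 1:
            self.buckets[user_id] = (tokens - 1, now)
            return True
        self.buckets[user_id] = (tokens, now)
        return False
```

The point of the bucket (rather than a fixed "20 every 2 hours" window) is that bursts during an editing session are absorbed while the hourly average stays capped.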

In any case, we went from unlimited down to a few actions per hour (20 every 2 hours is the current limit, right?) because of a few abusive users. That makes developing and testing very complicated and drawn out for the majority of non-abusive users, stretching over multiple days (while losing context, etc.).

→ Could you set a more accessible limit for everyone without impacting your systems? 200 requests per hour would help a lot for iterating without putting much pressure on your side.

In addition, sometimes there is no response at all (see comments above) from the builder. Having proper error responses, or any response at all, would help reduce the number of (blind) requests from our side.
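For illustration, a minimal sketch of what a structured error body from an action endpoint could look like, so the caller always gets something back instead of silence. The field names and helper here are hypothetical, not OpenAI's actual format:

```python
import json

def make_error_response(status, code, detail):
    """Build a structured JSON error body so callers never receive an
    empty reply (shape is illustrative, not a standard)."""
    body = {
        "error": {
            "code": code,      # machine-readable, e.g. "missing_auth"
            "detail": detail,  # human-readable explanation
        }
    }
    return status, json.dumps(body)

# e.g. for the "no authorization set" case mentioned below:
status, body = make_error_response(
    401, "missing_auth", "Authorization header was not set")
```

Even a terse body like this is enough to stop the guessing game on the builder side.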

I noticed something odd as well which doesn’t help: sometimes the builder’s requests fail (for example: no authorization set) and it retries the same request multiple times (I’ve seen 15 consecutive failed requests from one instruction to the builder in my API logs).
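Spotting those blind-retry bursts in your own API logs is easy to automate; here is a small sketch that collapses consecutive identical failed requests into one entry with a count. The log-entry shape (`method`/`path`/`status` dicts) is an assumption for illustration:

```python
from itertools import groupby

def collapse_retries(log_entries):
    """Collapse consecutive identical requests (same method, path, and
    status) into one entry with a retry count, to surface retry storms
    like the 15-in-a-row case described above."""
    collapsed = []
    for key, group in groupby(
            log_entries,
            key=lambda e: (e["method"], e["path"], e["status"])):
        collapsed.append({
            "method": key[0],
            "path": key[1],
            "status": key[2],
            "count": len(list(group)),
        })
    return collapsed
```

A burst of 15 identical 401s then shows up as a single line with `count: 15`, which makes the pattern hard to miss.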

That’s the thing though, they were abusing it in the edit mode where you can test the GPT.

And I warned against abusing it six weeks ago.

So the question becomes, how different does the use look if I’m legitimately testing, say, a coding assistant GPT versus making a coding assistant GPT so I can “test” it for 100 uninterrupted prompts.

Because anyone with a Plus account can build GPTs, that means anyone could potentially abuse it.

Once it was public knowledge that there was no limit when “testing” a GPT, they had to shut it down, because even people who had no desire to hammer the thing for 300 messages/hour would be very tempted to hop over to the GPT builder interface to keep their work going if they hit the cap.

That’s just human nature.

So, then, for a large percentage of Plus subscribers there’s not really any message cap anymore and that’s an untenable situation.

It was always going to have to go away.

Honestly… it is amazing it was ever a thing to begin with.

Use PUT if you want to update data; POST is to create it.
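To make the POST-vs-PUT split concrete, here is a toy in-memory resource store following the usual REST convention (this is a generic illustration of the convention, not anything specific to GPT actions):

```python
class ResourceStore:
    """Toy store: POST creates a resource (server assigns the id),
    PUT replaces the resource at a known id and is idempotent."""

    def __init__(self):
        self.items = {}
        self.next_id = 1

    def post(self, data):
        # POST creates a new resource; the server picks the id
        rid = self.next_id
        self.next_id += 1
        self.items[rid] = data
        return 201, rid  # 201 Created

    def put(self, rid, data):
        # PUT fully replaces the resource at a known id
        if rid not in self.items:
            return 404, None
        self.items[rid] = data
        return 200, rid  # 200 OK
```

Repeating the same PUT leaves the store unchanged (idempotent), while repeating a POST creates a new resource each time, which is exactly why updates should use PUT.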

If I ask the GPT if the Action is known, it states that it knows the action but isn’t allowed to call an external API. In the console I get these errors when I hit Test:

Any idea how to fix this?


No, I don’t.

Take a look at this; it was not providing a response as usual.
I asked it to give a string response, then it gave me this. Apparently, it doesn’t hit the API directly but uses some internal plugin, I guess.


Experiencing the same issue. Over the past 24 hours I slowly started noticing the performance degrading. Now, none of my endpoints are working, despite working hours before when they were first implemented in my latest GPT under construction. The action works fine in the custom GPTs I’ve already created and published, but publishing these new ones does not resolve the issue.

Glad to see I’m not the only one experiencing this, but it does seem to indicate a broader component- or cluster-wide issue. Hope this can be passed up to the support team to communicate to development, because it’s definitely a blocker and is preventing some pretty important use-case demos.


Hi,

if you use one of the older (working) GPTs: do you get any error message in the console? I think the “No config for gizmo” error is related to our problem.

Frank

I saw your post and double-checked the console for that error, but I didn’t see it. I only saw an error with a Permissions-Policy warning. I also checked the network tab, and there weren’t any failing requests.

However, I just retested and it appeared.

There was a minor issue in the network tab, but analyzing the HAR file, it seemed to come from a request blocked by the client, not on the OpenAI servers’ side.

So I retested (clearing the console and network tab), and the errors were not persistent in the console or in the network tab. Close inspection of the HAR looked fine as well.

So I did two final tests: 1) refreshing the page and retrying; 2) closing out of the editor, navigating back, and retrying. Both yielded the same result: no gizmo error in the console, but the same blocked request in the network tab.

All this leads me to believe the gizmo error may not be directly related to the issue. If anything, it may be an intermittent side effect of the greater root cause.

The same problem. Everything that worked earlier is not working anymore. At first, actions did not submit proper content (some fields were just empty). Then I tried to set up the authentication token again, and now it doesn’t work at all. Totally broken.

@alex07 Exactly!

For comparison, here is what the console looks like for a normal execution with an older GPT that is still functioning:

It starts to work once you publish your GPT, but it’s still less reliable than earlier. Some parameters in the action request are sometimes empty.


Now it works again! I get the debug information again without changing anything.

Frank

Yes, it worked on publishing. Thanks @alex07!