Agreed. For sure, if there is an unlimited option, some people will leverage it.
When we start building a custom GPT, we iterate on it and save it under Publish > Only me
→ Set a higher limit for private GPTs?
I guess you are able to identify which user/GPT abused the system?
→ Flag abusive users in your system
→ Set a stricter limit for abusive users
You could also check whether a user is using the GPT builder's interface in Edit Mode (legitimate usage) and be stricter with other usage sources (via API, …)?
In any case, we went from unlimited down to a few actions per hour (20 every 2 hours is the current limit, right?) because of a few abusive users. That makes developing and testing very complicated and slow for the majority of non-abusive users, dragging the work out over multiple days (while losing context, etc.).
→ Could you set a more accessible limit for everyone without impacting your systems? 200 requests per hour would help a lot for iterating without putting much pressure on your systems.
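To illustrate the suggestions above: a sliding-window limiter with per-tier caps could give trusted Edit Mode sessions more headroom and flagged users less. This is only a sketch of the idea; the tier names and numbers are my own illustration, not OpenAI's actual policy.

```python
import time

# Illustrative limits only (requests, window in seconds) -- not OpenAI's real numbers.
TIER_LIMITS = {
    "builder_edit_mode": (200, 3600),   # generous cap for legitimate iteration
    "default": (20, 7200),              # the current "20 every 2 hours" cap
    "flagged": (5, 7200),               # stricter cap for previously abusive users
}

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds, per user and tier."""

    def __init__(self):
        self.history = {}  # user_id -> list of request timestamps

    def allow(self, user_id, tier="default", now=None):
        limit, window = TIER_LIMITS.get(tier, TIER_LIMITS["default"])
        now = time.time() if now is None else now
        # Keep only requests still inside the window.
        recent = [t for t in self.history.get(user_id, []) if now - t < window]
        if len(recent) >= limit:
            self.history[user_id] = recent
            return False  # over the cap for this tier
        recent.append(now)
        self.history[user_id] = recent
        return True
```

With something like this, a flagged user hits their cap after a handful of requests while a builder iterating in Edit Mode keeps working.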
In addition, sometimes there is no response at all (see comments above) from the builder. Having proper error responses, or any response at all, would help reduce the number of (blind) requests from our side.
I noticed something weird as well which doesn't help usage: sometimes the builder's requests fail (for example: no authorization set) and it retries the same request multiple times (I've seen 15 consecutive failed requests from 1 instruction to the builder in my API logs).
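On that retry behavior: what I'd expect from a well-behaved client is to retry only transient failures with backoff, and to fail fast on auth errors, which no amount of retrying will fix. A minimal sketch of that principle (my own illustration, not the builder's actual client code):

```python
import time

# Status codes where a retry can plausibly succeed (rate limits, transient 5xx).
RETRYABLE = {429, 500, 502, 503, 504}

def call_with_retries(send, max_attempts=3, base_delay=1.0, sleep=time.sleep):
    """`send` performs one request and returns an HTTP status code."""
    for attempt in range(max_attempts):
        status = send()
        if status < 400:
            return status                      # success
        if status not in RETRYABLE:
            return status                      # e.g. 401: fail fast, don't retry
        if attempt < max_attempts - 1:
            sleep(base_delay * 2 ** attempt)   # exponential backoff between tries
    return status
```

Under this policy a missing-authorization 401 produces exactly one failed request in the Action's logs instead of 15.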
That's the thing though, they were abusing it in the edit mode where you can test the GPT.
And I warned against abusing it six weeks ago.
So the question becomes: how different does the usage look if I'm legitimately testing, say, a coding assistant GPT versus making a coding assistant GPT so I can "test" it for 100 uninterrupted prompts?
Because anyone with a Plus account can build GPTs, that means anyone could potentially abuse it.
Once it was public knowledge that there was no limit when "testing" a GPT, they had to shut it down, because even people who had no desire to hammer the thing for 300 messages an hour would be very tempted to hop over to the GPT builder interface to keep their work going if they hit the cap.
Thatās just human nature.
So, then, for a large percentage of Plus subscribers there's not really any message cap anymore, and that's an untenable situation.
It was always going to have to go away.
Honestly… it is amazing it was ever a thing to begin with.
If I ask the GPT whether the Action is known, it states that it knows the action but isn't allowed to call an external API. In the console I get these errors when I hit Test.
Take a look at this, it was not providing a response as usual.
I asked it to give a string response, then it gave me this. Apparently it doesn't hit the API directly but uses some internal plugin, I guess.
Experiencing the same issue. Over the past 24 hours I slowly started noticing the performance degrading. Now none of my endpoints are working, despite working hours before when they were first implemented in my latest GPT under construction. The action works fine in the custom GPTs I've already created and published, but publishing these new ones does not resolve the issue.
Glad to see I'm not the only one experiencing this, but it does seem to indicate a greater component- or cluster-wide issue. I hope this can be passed up to the support team to communicate to the developers, because it's definitely a blocker on development and is preventing some pretty important use-case demos.
If you use one of the older (working) GPTs: do you get any error message in the console? I think the "No config for gizmo" error is related to our problem.
I saw your post and double-checked the console for that error, but I didn't see it. I only saw an error with a Permissions-Policy warning. I also checked the network tab, and there weren't any failing requests.
There was a minor issue in the network tab, but analyzing the HAR file, it seemed to be from a request blocked by the client, not on the OpenAI server side.
So I retested (clearing the console and network tab), and the errors were not persistent in the console or in the network tab. Close inspection of the HAR looked fine as well.
So I did two final tests: 1) refreshing the page and retrying, and 2) closing out of the editor, navigating back, and retrying. Both yielded the same result: no gizmo error in the console, but the blocked request still appeared in the network tab.
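For anyone else digging through HAR exports: a HAR file is plain JSON, so failed or client-blocked requests can be filtered out programmatically instead of eyeballing the network tab. A minimal sketch (the filename is a placeholder):

```python
import json

def failed_entries(har_path):
    """Return (url, status) pairs for failed or blocked requests in a HAR file."""
    with open(har_path) as f:
        har = json.load(f)
    failures = []
    for entry in har["log"]["entries"]:
        status = entry["response"]["status"]
        # Status 0 usually means the request was blocked client-side
        # (which matched what I saw), not rejected by the server.
        if status == 0 or status >= 400:
            failures.append((entry["request"]["url"], status))
    return failures

# Example: failed_entries("my_session.har")  -- "my_session.har" is hypothetical.
```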
All this leads me to believe the gizmo error may not be directly related to the issue. If anything, it may be an intermittent side effect of the greater root cause.
Same problem here. Everything that worked earlier is not working anymore. At first, Actions did not submit proper content (some fields were just empty). Then I tried to set up the authentication token again, and now it doesn't work at all. Totally broken.