Status? GPT only sends 89 words before hanging

I checked the OpenAI status page for January 4th, and it is all green. However, my GPT now responds with only about 89 words before cutting off and hanging.

Having just signed up, I had high hopes. However, I’m getting the impression my $20 is going toward beta testing and debugging.

Are you setting your max_tokens really small? What’s the prompt? What model? Settings?

I’m in the OpenAI UI … their website. Unlike Unreal Engine, where I connect through the API and set the max input and response tokens, this is just the generic GPT creator on the OpenAI website. I haven’t seen an option to adjust response tokens. Additionally, it was working fine a couple of hours ago, with enumerated responses ranging in length from three items to 12 or 13.
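For reference, a response-length cap like max_tokens only exists when you call the API directly (as with the Unreal Engine integration mentioned above); the GPT builder on the website exposes no such setting. A minimal sketch of where that cap would live, assuming the current openai Python package, a placeholder model name, and an API key in the OPENAI_API_KEY environment variable:

```python
# Minimal sketch: the response-length cap is an API-side parameter.
# Assumes the openai Python package (v1+) and OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": "List ten troubleshooting steps."}],
    max_tokens=1024,  # a small value here would truncate the reply
)

print(response.choices[0].message.content)
```

A too-small max_tokens would produce exactly this kind of truncation, but that knob simply is not available in the web UI.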

It must be a system problem. I cannot even log into the GPT builder now. Apparently 7 pm on a Thursday is a heavy-load time for OpenAI.

If it is truly a GPT and not a custom product, it uses the parameters and model of ChatGPT Plus. There is no parameter to set other than the instructions.

What you can check is which GPT features are enabled: DALL·E, code interpreter, or external actions. Then you can determine whether the output stops at a point where the AI tried to invoke one of those.

You can also check whether other GPTs from the Explore directory continue writing at length, or whether there is a current problem with the AI or your connection.

The models served to end users can be “under test” or can simply change. If your list is just terminating with too few items, you can make your instructions more robust to produce the desired output length.

  1. Truly.
  2. Right – no instructions.
  3. Nothing is enabled.
  4. The entire site is down.
  5. It’s my GPT, which is currently not responding due to site traffic or something. I can connect reliably to this community forum. It’s OpenAI network issues.

To assist, some diagnosis is needed:

5: If the entire site is “down”, regardless of whether you are trying to use your GPT, another OpenAI GPT from the explore directory, or just plain GPT-4, then that is likely a service problem.
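One way to confirm a suspected service problem from your side is to poll the status page programmatically instead of eyeballing it. A minimal sketch, assuming status.openai.com is a standard Statuspage site exposing the usual /api/v2/status.json summary endpoint (the exact URL is an assumption) and that the requests package is installed:

```python
# Hypothetical status check: Statuspage sites normally expose a JSON summary.
# Adjust the URL if status.openai.com does not follow that convention.
import requests

resp = requests.get("https://status.openai.com/api/v2/status.json", timeout=10)
resp.raise_for_status()
status = resp.json()["status"]

# e.g. "none - All Systems Operational" when nothing is reported
print(status["indicator"], "-", status["description"])
```

If this reports all green while ChatGPT itself is unusable across models, the problem may simply not have been acknowledged on the page yet.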

If you made one GPT that stops prematurely, and you then make another GPT that performs a more basic task of similar length (“you write 10-paragraph children’s stories”), you can investigate what is different about the unsatisfactory GPT.

Also, ChatGPT has a limited output length per response; look for a “continue” button, or type “continue”.


"(expand) - the "my GPT ideas are working for me" text

(Today’s discussion features Tim Berners-Lee, inventor of the World Wide Web, and Radia Perlman, known for her work in network engineering and her invention of the Spanning Tree Protocol.)

[Tim Berners-Lee]: Well, Radia, when we’re looking at slow network responses on a website, the first thing to consider is whether the issue is on the user’s end or the server’s end. Users should start by checking their own internet connection. Simple steps like running a speed test can be quite revealing.

[Radia Perlman]: Absolutely, Tim. And it’s not just about the speed; it’s also about stability. Packet loss or fluctuating speeds can significantly impact performance. But if their connection is fine, then it’s time to look at other factors. For instance, they should try accessing the site from different devices or networks to see if the problem persists.

[Tim Berners-Lee]: That’s a great point. Cross-testing can eliminate local issues. Now, if the problem is consistent across different networks and devices, it might be on the website’s end. It’s important for users to understand that web performance can be affected by various factors like server load, content delivery networks, and even the website’s code efficiency.

[Radia Perlman]: Exactly, Tim. And let’s not forget about geographic location. Users far from the server might experience slower responses. Tools like traceroute can help identify if there are any network hops that are causing delays. But, if a user has ruled out all these factors and the issue persists, it might be time to contact the website’s support team.

[Tim Berners-Lee]: Indeed, Radia. Website support can provide insights into server-side issues that users can’t diagnose themselves. They might be undergoing maintenance, facing unexpected traffic surges, or dealing with backend problems. It’s a complex ecosystem, and sometimes the answer lies with those who maintain it.

(Expert characters are selected by the AI only to lend subject-matter answering skill, with no claim to likeness rights. AI responses represent neither the actual experts’ opinions nor their positions.)
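For what it’s worth, the user-side checks described in the dialogue above (timing a request to the site and looking for packet loss) can be scripted. A rough sketch, assuming a Unix-like system with ping on the PATH and the requests package installed; the hostname is only an example:

```python
# Rough sketch of the user-side checks: time an HTTPS request and
# estimate packet loss with ping.  Assumes `ping -c` (Linux/macOS).
import subprocess
import time

import requests

host = "chat.openai.com"  # example target, swap in the site being tested

# 1. Latency/stability of a plain HTTPS request
start = time.monotonic()
requests.get(f"https://{host}", timeout=30)
print(f"HTTPS round trip: {time.monotonic() - start:.2f}s")

# 2. Packet loss over 10 ICMP echoes
result = subprocess.run(
    ["ping", "-c", "10", host],
    capture_output=True, text=True, check=False,
)
for line in result.stdout.splitlines():
    if "packet loss" in line:
        print(line.strip())
```

If these look normal from several networks while the site still hangs, that points back to the server side, which is where the dialogue lands as well.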

I’ve tested other GPTs … all of them are hanging mid-response. It’ll probably relent soon enough.
