ChatGPT Plus writes code and stops near the end - Node.js

“Brain Storm” JavaScript code
I have a ChatGPT Plus subscription and use it to “brainstorm” JavaScript code for a mix of projects.
The code mainly runs on Node.js.
I have found that when I give ChatGPT a very well-defined task, the resulting code is very good.
I keep my requirements to small functions so that the resulting code is less than 80 lines.

“Just Stops”
However, many times when the code gets near the end it “just stops”. Often I can finish the code myself, but it’s quite a let-down from such a smart system.

ChatGPT retry bug
If you ask ChatGPT to continue from “for loop x”, for example, it will rewrite the code from the start and hang in about the same place. So you have to teach it, or rather give it some rules.

Workaround and other bugs
My fix is to let it code until it stops and hangs, and then type:
“Your code is not finishing to the end. Can you code in sections?” … (GPT) Sure.
It will hit the same bug, retype the code again from the start, and hang (showing it is a rule-base issue, not an AI issue).

Then type: “No, failed! Type in sections, as code, from the ‘for loop x’, keeping the same context till the end.”
It is very important to add “sections” and “as code”, because GPT has another bug where it forgets to format the output as code and gives you the code as chat text.

The other issue is that GPT needs to follow the algorithm it is creating from the beginning to keep the full context. Typing “keeping the same context till the end” is what stops value X from quietly becoming Y.

Summary
I have read there is a line limit, but sometimes it hangs after just a few lines. Also, I pay for ChatGPT Plus, so it is faster with fewer restrictions. I’m still very impressed with my “brain storms” with ChatGPT; you just have to work with it.

I have found that when it has made a coding mistake and you suggest a different approach, it builds on it in ways you might not have used. I have been very impressed with its use of MySQL and async in Node.js, which has some tricky code issues that it handles better than us humans!
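For anyone curious what that looks like, here is a minimal sketch of promise-based MySQL access in Node.js, assuming the mysql2 package and a local database; the connection settings, table, and column names are made up for illustration.

```javascript
// Minimal sketch of async MySQL access in Node.js, assuming the "mysql2" package
// (npm install mysql2) and a local database; table and column names are made up.
const mysql = require("mysql2/promise");

async function getProjectNames() {
  // A pool keeps connections open and hands one out per query
  const pool = mysql.createPool({
    host: "localhost",
    user: "dev",
    password: "secret",
    database: "brainstorm",
  });

  try {
    // await the promise-based query instead of nesting callbacks
    const [rows] = await pool.query("SELECT name FROM projects WHERE active = ?", [1]);
    return rows.map((row) => row.name);
  } finally {
    await pool.end(); // always release the pool, even if the query throws
  }
}

getProjectNames()
  .then((names) => console.log(names))
  .catch((err) => console.error("Query failed:", err));
```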

Also, I know this sounds strange, but I find myself writing “Good”, “Great”, “Well done”, just in case the AI team decides to use positive reinforcement in the future.

There is another thread about this; so far the answer is to wait 15 minutes for the reply to finish (not 100% reliable) or simply type “continue” and GPT will continue from exactly where it left off. Hopefully it’s a known issue for the devs by now.


No, the bug means it will keep writing from the beginning again if you type “continue”. Also, waiting rarely gets the reply completed. But thanks for your suggestions.

Interesting, I haven’t heard of that one yet. Did you try “continue from ‘this line here’”? That’s how I used to do it before I got lazy and just typed “continue”.

Yes, “continue” is a gamble. I did find better results if you use “continue from” the start of a function or a sample of code from before the hang, but it is very important to add “same context”, as I found GPT would forget that the width was w and the height was h and continue with the context changed.

There are so many threads about this, and it’s the same answer.

If the token limit is reached midway through an output, the output will be abruptly cut off. It’s a limitation of the model itself. If you have reached the limit, your conversation will be truncated, which can include any comments or code you started with.

The solution is to start a new conversation, send the important chunks of your code and explain the architecture, or use a version of GPT-4 with a higher token limit.
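If you go the API route, something like the following works as a rough sketch, assuming the official openai npm package (v4) and an OPENAI_API_KEY environment variable; the model name here is just an example and depends on what your account can access.

```javascript
// Rough sketch: ask a larger-context GPT-4 variant via the API instead of the chat UI.
// Assumes the official "openai" npm package (v4) and an OPENAI_API_KEY environment
// variable; the model name is an example, substitute whatever you have access to.
const OpenAI = require("openai");

const client = new OpenAI(); // picks up OPENAI_API_KEY automatically

async function askWithMoreContext() {
  const completion = await client.chat.completions.create({
    model: "gpt-4-32k", // example of a larger-context variant
    max_tokens: 2048,   // cap the reply so prompt and answer both fit in the context window
    messages: [
      { role: "system", content: "You are helping refactor a Node.js project." },
      { role: "user", content: "Here is the architecture summary and the key module: ..." },
    ],
  });

  console.log(completion.choices[0].message.content);
}

askWithMoreContext().catch((err) => console.error(err));
```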

I’ve also read here that it will happen if the servers become overloaded, even without hitting the token limit.

For sure, although I have never experienced that without something like a “Network Error” being thrown. For now it is fair to focus on the most common and proven cause.

I have read about the token limit, and I keep my code tasks to simple sections. Although tokens make sense, it is not easy to judge what you are using up. I also use GPT-4, but it is so slow. It is all rule-based, so the AI part is a bit of a deal breaker, but I am still very impressed with its data access speed. I’ve stopped using Google search and code help sites.

Agreed. It’s difficult to know what has been truncated to stay within the token limit.
It’s a bit cheeky and risky, but I also tend to ask it for API references rather than look them up myself. Probably not good practice, and I’ve already dealt with some deprecated code, but what can I say.


What are tokens and how to count them?

https://platform.openai.com/tokenizer
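If you want a rough count from Node.js rather than the web tokenizer, here is a small sketch, assuming the gpt-3-encoder npm package; it approximates the GPT-3 tokenizer, so counts for newer models may differ slightly.

```javascript
// Rough token count in Node.js, assuming the "gpt-3-encoder" npm package
// (npm install gpt-3-encoder). Treat the result as an estimate.
const { encode } = require("gpt-3-encoder");

const prompt = "Write a Node.js function that reads a file and counts the lines.";
const tokens = encode(prompt);

console.log(`Characters: ${prompt.length}`);
console.log(`Approximate tokens: ${tokens.length}`);
```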