Well, I have conducted numerous tests with the free version and now with the Plus version, and there is a pretty serious bug related to a limitation. When the chat responds, for example, to a query about code, it always stops at some point and gets stuck. We are not talking about many lines of code; we are talking about something small, 50 or 60 lines. It always stops, and even though it indicates that it should continue, it never does so, neither where it left off nor in the context it was in. This makes its use frustrating. I thought this would change in the paid version, but it is the same. I have seen that it is capable of responding wonderfully to code queries, but this bug is causing me and other coders to hesitate to keep using it in the future, even paying for it.
Others with a similar issue in the past have reported (here in this community) that replying with the prompt “continue” solves this problem.
You might give it a try and see if that approach works for you.
I have been conducting several hours of testing for weeks now, and the most frustrating thing is that sometimes it works perfectly fine, continuing from where it left off or picking up the response from where it stopped. However, more than 70% of the time, it gives the same response again or simply picks up a thread from wherever it feels like. For text, it may be easy to pick up a thread, but for code, it is terrible! Everything must follow a certain syntactic order. Honestly, after conducting thousands of tests, when it works well, I feel it is the best tool coders could have been given. However, when errors occur consistently, I feel like we are being deceived by paying for something that works so poorly.
I have noticed this too. Especially at busy times, it does that.
The way to get around this is to prompt “continue” if its last message was cut off in the middle. With “continue”, it picks up from where it left off.
Usually, it does not return to where it left off; instead, it goes off to a different context or answers things that do not make sense. For instance, I was working on a class in C#, but it picked up with Python code. That said, I have seen that sometimes it does not generate such a conflict, and then it is a wonderful tool. I could accept this error in the trial version, but it is frustrating that we cannot use the paid version to its full extent, especially with this dramatic error.
I’ve been testing the “continue” trick for 4 days now, and it goes like this: it doesn’t always resume where it left off. Sometimes it outputs the whole code again, other times it answers in another context, which is terribly frustrating for a coder, and other times it goes wonderfully. It bothers me: it’s a wonderful tool, but these three bugs ruin it completely.
You should use ChatGPT to help with code by breaking your code requirements into small functional code blocks or methods. In this way, you will not require ChatGPT to try to complete long blocks of code (which increases the chances of errors anyway) and it is also a basic good coding practice to have modular code with many functions and methods and short code blocks. This approach is much easier to debug as well.
In other words, the “secret sauce” of coding is short, well-defined, reusable code blocks, methods, functions and modules.
This is how I code, and I never have any problems with ChatGPT “running out of steam” when I prompt ChatGPT to write a method for me.
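As an illustration of that modular style (a minimal sketch in Python, with hypothetical function names and a made-up price-summary task, not anything from the thread), each piece below is short enough to fit comfortably in a single response, and you can ask for them one at a time before composing them:

```python
# Sketch of the "short, well-defined, reusable blocks" approach:
# request each small function separately instead of one long script.

def parse_price(raw: str) -> float:
    """Convert a raw price string like '$1,234.50' to a float."""
    return float(raw.replace("$", "").replace(",", ""))

def moving_average(values: list[float], window: int) -> list[float]:
    """Simple moving average over a sliding window."""
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

def summarize(raw_prices: list[str], window: int) -> list[float]:
    """Compose the small, independently testable pieces into the full task."""
    return moving_average([parse_price(p) for p in raw_prices], window)
```

Each block can be requested, reviewed, and debugged on its own, e.g. `summarize(["$1.00", "$2.00", "$3.00"], 2)` returns `[1.5, 2.5]`.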
I code for the trading platform NinjaTrader, which has its own NinjaScript language, essentially a custom derivative of C#. For example, an NT8 script has several methods, and in particular one with very long lines of code where many things are calculated. That is where it is a torment to join the pieces of code that ChatGPT creates or corrects: it does well but stops mid-solution, and the bug is that it does not follow the thread of what it was doing.
Sometimes it follows the prompt’s instruction to continue showing the code; other times it takes everything out of context and responds with code from somewhere in hell!
Translated with DeepL (free version).
I see similar issues.
Unfortunately, some workarounds also only seem to work in some cases. For example, prompts like “Send the response in parts of 30 lines max and wait for me to ask for the next part” sometimes do well, but then a reply tries to give the full response in one go again and breaks off again. It’s a bit frustrating.
It would really be nice to have some defaults for this, for example always splitting responses of more than 30 lines into parts. I tried setting these defaults as a prompt within a thread, but that doesn’t work; at least I didn’t find a way to make it work.
When it gets “stuck”, I just prompt it with “Continue where you left off” and it does.
Type “continue exactly where you left off” plus a copy of part of the last line. Then it doesn’t wander off.
My experience as a novice programmer:
When things are working as expected, ChatGPT helps me learn, and solve problems quickly and clearly, reducing noise from web searches that don’t spotlight needed details as quickly as the bot does.
Limits on returned code need to be resolved for ChatGPT to reach its full potential. Many break problems occur around the 100-line mark.
- “Continue”, “continue writing from given line” and other workarounds work sometimes, but most of the time I find that’s where progress on code writing breaks down. Often at this point further investment has not paid off, and I’ll restart fresh, with things getting confusing as we start, restart, and course-correct.
When using smaller code, the bot may not keep context and may create more problems. For instance, in Unity I was creating an AR project using touch interactions. When I asked for smaller code referencing what we had covered in previous code, the bot took the liberty of switching to mouse interactions and adding back the libraries we had already specified should not be in the code. Minor issues, but they spotlight how these smaller code blocks can be more challenging to work with than one script covering all the needed behaviors as the bot’s output.
I totally understand that part of the challenge is the limits of my own programming experience… and the bot has taken me further and faster than before. I’m extremely delighted and impressed with the product; I just wish it could be more seamless with code generation.
Part of learning to program is learning to structure code in smaller pieces: objects, modules, classes, etc. So take this as a learning curve.
The “continue” trick is like Russian roulette: sometimes it works well, sometimes it doesn’t. Sometimes the context goes great, sometimes it loses it, and the learning curve is completely ruined.
I really am sure I will get the code right, but it constantly stops and ruins the wonderful experience… And what has happened with ChatGPT 4? It’s amazing; it keeps the context very well and seems to keep the conversation present. I’ve managed to get it to make me an identical dashboard from an image, but it seems they are throttling it. Some days the limit is 100 messages; today it’s 25. Maybe today I can’t send it links to images, since it no longer reads them. I think they are doing an overhaul and will charge much more than Plus for the ideal version for a programmer.