As @Foxabilo stated, there are simply far too many tokens there. Until I can get to my PC and verify, I'll trust the 30k estimate provided.
So, I'm absolutely not surprised the model went sideways on you; I observed the same behavior several months ago while working on a coding project.
The problem shows up when some critical piece of context gets dropped and the model has to fill in the gaps. If it doesn't reconstruct them almost exactly as they originally were, you get a cascade of problems as the model makes increasingly desperate attempts to reconcile what it did before with what it's doing now.
As it needlessly refactors more and more code, other critical pieces start to fall out of context, and the problem compounds.
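To make the failure mode concrete: as I understand it (OpenAI hasn't published the exact truncation strategy), the practical effect is roughly that the oldest turns get evicted first once the window fills up. A minimal sketch of that effect, assuming a hypothetical `count_tokens` helper (e.g., something tiktoken-based):

```python
# Conceptual sketch only: this illustrates the *effect* of context truncation,
# not ChatGPT's actual mechanism. `count_tokens` is a hypothetical stand-in.

def trim_to_budget(messages, budget, count_tokens):
    """Drop the oldest turns until the conversation fits the token budget."""
    kept = list(messages)
    while kept and sum(count_tokens(m["content"]) for m in kept) > budget:
        kept.pop(0)  # the earliest turns (often your original spec) go first
    return kept
```

Once your original spec is among the turns that get popped, the model is guessing at it from whatever remains.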
The solution, as has already been given to you, is to periodically collect all of the “good” parts, start a new chat with only those pieces in context, and continue from there.
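If you drive the model through the API, that “fresh start” is just a new messages list seeded with only the keepers. A purely illustrative sketch (the file name and wording are placeholders):

```python
# "known_good.txt" is a placeholder for whatever curated code and notes
# you collected from the old chat.
curated_start = [
    {"role": "system", "content": "You are helping me build a Drupal module."},
    {"role": "user", "content": "Here is the current, known-good code:\n\n"
                                + open("known_good.txt").read()},
]
# Begin the new conversation from this clean slate, not the full old history.
```

In the web UI, the equivalent is simply pasting those pieces into the first message of a new chat.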
Two other pieces of advice I can give you:
- Use the “Edit prompt” button and “Save & Submit” when you need the model to do something slightly different. This removes the wrong response from the context, so it doesn't take up valuable space and won't be referenced again.
- Run incidental questions in a separate chat. Things like:
A. When a new user installs this module, will these updates be applied automatically in the new install?
B. So, do I need to modify the existing install file with the changes and remove the update functions for any new installs?
C. In Drupal, how do I get the IP address of the current user?
D. What was the command to update Composer?
All of these can be run in a separate chat instance, further cleaning up your context window.
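If you work through the API, keeping side questions out of the main thread is easy to script. A sketch assuming the OpenAI Python SDK (openai>=1.0); the model name is a placeholder:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def side_question(question: str) -> str:
    """Ask a one-off question in its own throwaway conversation,
    so it never touches the main chat's context window."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder: use whichever model you're on
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Fold the *answer* back into your main prompt, not the whole exchange.
print(side_question("In Drupal, how do I get the IP address of the current user?"))
```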
In short, it's generally best to keep only the most salient information in context. If the model gives you a response that falls short of your requirements or expectations, you can certainly ask it to fix the response in the next prompt, but it's better to figure out how to modify the initial prompt so that it produces the “fixed” response directly. If you have questions adjacent to the work you're doing, ask them separately, then fold your new-found knowledge into the prompt to get the desired result.
It is a little more work upfront, but much less in the long run.
Also, this is not a new problem or indicative of any decline in the quality of the models. This has been a recurring issue with all GPT models from their inception.
One final note: things looked like they were going more or less fine until the model was switched midstream. I'm not sure much has been published on how models handle continuing conversations started under a different model, but the immediate loss of half your context window certainly could not have helped matters.