I have found that when working with ChatGPT lately, something has changed in the way context works.
A normal conversation leaves certain things implicit: if I have GPT write a program, I may then ask about it without specifically pointing out that I mean that program, plus the 10+ other aspects of context (that is what a conversation history is for, right?).
Lately ChatGPT acts very “dumb,” like it is missing brain cells. It will just start redoing the work from scratch. Then I stop it and say “no, use our previous discussion,” and it does it with the right context.
So has ChatGPT been turned into some sort of “quick one-shot” tool? It is more of an InstructGPT than a ChatGPT, and that is really not very useful.
I have to tell GPT to remember what was just output. It is not context size or anything like that; it is simply not looking at (or is ignoring) our previous discussion, thinking we are “starting again fresh.”
Seems like a huge bug. I am not sure why nobody has noticed that this is probably the cause of all the “GPT doesn’t code well anymore” complaints. Yes, because it acts like it constantly gets a lobotomy unless you remind it that you have been speaking with it previously.