This is the main feature I need these days, given that in my experience responses have gotten worse than they used to be. I am very frustrated and disappointed by the repetition of errors no matter what I prompt. The core reason mostly seems to be: no memory of what happened (even a few seconds ago), and no change in behavior even after negative feedback emphasizing that the results are wrong, nonsense, incomplete, and so on. This state of so-called “Intelligence” is IMHO far from intelligent and unacceptable for the future. The way forward would be to accept bug reports and change requests rather than rely on the “self-learning” capabilities of the LLM, which are obviously far from (at least my) user expectations.
Welcome to the community!
Apart from the in-app feedback mechanisms, you can try your luck at help.openai.com
However, it sounds like you’re experiencing some common issues when dealing with LLMs.
Memory issues
Users have reported that GPT-4o seems to be especially gold-fishy. Switching to 4-Turbo (GPT-4 in ChatGPT) might yield better results.
Negative Feedback
Negative feedback typically doesn’t work all that well with LLMs. If an LLM can’t grok what you want it to do, try to figure out what it would need to accomplish the task. Instead of berating the model, edit your last message so that it provides enough information, and regenerate the response.
Incomplete Responses
If you don’t get a “Continue” button, simply tell it to continue where it left off. The models can only generate around 4000 tokens at once; that seems to be a technical limitation.
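For API users, the same limit shows up as a `finish_reason` of `"length"` on the response, and the usual workaround is the same idea in a loop: feed the partial answer back and ask the model to continue. A minimal sketch of that loop, with a stand-in `generate` function in place of a real API client (the function name and the "Continue exactly where you left off" wording are illustrative, not an official recipe):

```python
from typing import Callable, List, Tuple


def generate_full_response(
    generate: Callable[[List[dict]], Tuple[str, str]],
    messages: List[dict],
    max_rounds: int = 5,
) -> str:
    """Keep asking the model to continue until it stops on its own.

    `generate` stands in for a chat-completion call; it returns
    (text, finish_reason), where finish_reason == "length" means the
    model hit its output-token limit mid-answer.
    """
    parts = []
    for _ in range(max_rounds):
        text, finish_reason = generate(messages)
        parts.append(text)
        if finish_reason != "length":
            break  # the model finished on its own
        # Feed the partial answer back and ask for the rest.
        messages = messages + [
            {"role": "assistant", "content": text},
            {"role": "user", "content": "Continue exactly where you left off."},
        ]
    return "".join(parts)
```

The `max_rounds` cap is just a safety valve so a confused model can’t loop forever.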
It might be important to note that LLMs aren’t “self-learning”. Apart from the memory features, each chat is more or less completely stateless.
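“Stateless” here means the model itself retains nothing between turns; any appearance of memory comes from the client resending the entire conversation with every request. A rough sketch of that bookkeeping, where `send` is a placeholder for a real API call (the class and names are illustrative):

```python
class Conversation:
    """Accumulates the message history that must be resent on every turn."""

    def __init__(self, system_prompt: str):
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, user_text: str, send) -> str:
        # The model sees ONLY what is in this list; nothing else persists
        # between requests on the server side.
        self.messages.append({"role": "user", "content": user_text})
        reply = send(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply
```

This also explains the “goldfish” behavior: once the history is truncated to fit the context window, anything dropped from the list is simply gone.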
I am 100% with you on this!