How to make ChatGPT file a bug report against itself?

This is the main feature I need these days, given that in my experience responses have gotten worse than they used to be. I am very frustrated and disappointed by the repetition of errors no matter what I prompt. Mostly the core reason seems to be: no memory of what happened (even a few seconds ago), and no change in behavior even after negative feedback emphasizing that the results are wrong, nonsense, incomplete, and so on. This state of so-called “Intelligence” is IMHO far from intelligent and unacceptable going forward. The way forward would be to accept bug reports and change requests, rather than relying on the “self-learning” capability of the LLM, which is obviously far from (at least my) user expectations.


Welcome to the community!

Apart from the in-app feedback mechanisms, you can try your luck at help.openai.com

However, it sounds like you’re experiencing some common issues when dealing with LLMs.

Memory issues

Users have reported that GPT-4o seems to be especially gold-fishy. Switching to 4-Turbo (GPT-4 in ChatGPT) might yield better results.

Negative Feedback

Negative feedback typically doesn’t really work all that well with LLMs. If an LLM can’t grok what you want it to do, try to figure out what it would need to accomplish the task. Instead of berating the model, edit your last message so it provides enough information, and regenerate the response.

Incomplete Responses

If you don’t get a “Continue” button, simply tell it to continue where it left off. The models can only generate around 4000 tokens at once; that seems to be a technical limitation.

It might be important to note that LLMs aren’t “self-learning”. Apart from the memory features, each chat is more or less completely stateless.
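To illustrate what “stateless” means in practice, here’s a rough sketch using the Python SDK (the model name and messages are just placeholder examples): each API call only “remembers” whatever history you resend with it; nothing carries over on the model’s side between calls.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The model keeps no state between requests: every call must resend the
# entire conversation history you want it to take into account.
history = [
    {"role": "user", "content": "My name is Alex."},
    {"role": "assistant", "content": "Nice to meet you, Alex!"},
    # This last question is only answerable because the earlier turns
    # are included again in the same request.
    {"role": "user", "content": "What is my name?"},
]

response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=history,
)
print(response.choices[0].message.content)
```

If you dropped the first two messages from `history`, the model would have no idea who “Alex” is, which is essentially what happens when a chat’s context is lost.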


I am 100% with you on this!

I’ve also found that, even though 4o is the best-performing free version (it’s not o1, of course), it sometimes behaves like 4o-mini or 3.5. Given the intensive, fast-paced use I’ve been giving it, what I would do is switch between conversations and push it to continue. Sometimes that worked, which is great, but most of the time it goes back to asking about what’s mentioned in the message above. So sometimes it works, sometimes it doesn’t. In other words, it’s unreliable.

I’ve usually gotten frustrated with 4o-mini, which I tend to “test” with a lot of restrictions, loops, and error checks to force it to continue or elaborate on something that 4o would understand more quickly. This habit lets me quickly spot when a model that should be performing well starts acting more like Claude Haiku than Sonnet.

Good morning. I don’t know if it’s a coincidence, but ever since I subscribed to the paid version, my responses come out cut off. I can partially work around it by asking it to send me the response in a Word document, but that is a real hassle, because it forces me to download a large volume of documents. It doesn’t work well even for a short response, and it doesn’t correctly follow the instructions I give it either. When I started using it, it worked very well; by this I mean that it can’t be that I don’t know how to give it instructions.
I hope you can help me.
Thank you