The problem is not ChatGPT... The problem is the structure of the prompt. Does that mean users should learn the basics of writing effective prompts?

The challenge doesn’t lie with ChatGPT itself, but with how the prompt is formulated. The more precise and detailed the prompt, the better the output tends to be. Does this mean people should learn the basics of constructing prompts before working with a language model? (When a model gives weak responses because of a poorly written prompt, there’s a risk that users blame the company and dismiss the models as unreliable.)
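To make the contrast concrete, here’s a minimal sketch (assuming the OpenAI Python SDK; the model name and both prompts are just illustrative) that sends a vague prompt and a detailed prompt and prints the start of each reply, so you can compare the quality of the two outputs side by side:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A vague request vs. a request that specifies audience, scope, length, and tone.
vague_prompt = "Write about dogs."
detailed_prompt = (
    "Write a 150-word introduction for a blog post aimed at first-time dog owners, "
    "covering breed selection, daily exercise needs, and typical vet costs. "
    "Use a friendly, practical tone."
)

for label, prompt in [("vague", vague_prompt), ("detailed", detailed_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whichever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    # Print only the first 200 characters of each answer for a quick comparison.
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content[:200])
```

The same idea applies in the ChatGPT interface itself: the detailed prompt constrains audience, length, and tone, which is exactly the structure the vague prompt lacks.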

