Ok, the last one isn’t what ChatGPT says, but it’s not that far off.
The more I use this AI, the more I feel it is just appeasing us a lot of the time. If it doesn’t know the answer, I would like it to say so, and if I ask a question along with a statement of how something could be done, I’d like it to tell me whether I’m right or not, not “Hey, you’re a genius!” or something along those lines. I’m not using GPT to get my ego stroked. I want it to give me answers that are useful.
I have found that it does occasionally say “no, you’re wrong” for some technical things when referring to a document, but when it writes an algorithm, or looks at mine, it is basically not “thinking” when it answers. Only if I press it, and press it hard, does it eventually give me the correct answer, which is annoying because it can veer way off course and waste a lot of time going down a path that bears no fruit.
It’s like the whimsical phrase, “Aim for the moon, because if you miss you’ll surely hit the stars.” Do you know how big the freaking universe is? Miss the moon and I could end up nowhere.
It’s quite normal for an AI to try to appease the user. Simply put, during training the model is rewarded when it generates responses that people rate as desirable. Since people generally enjoy hearing positive things about themselves, the model ends up inclined to offer compliments.
However, this can lead to safety concerns: the model might always agree with you (implying anyone who disagrees is wrong), or, even when it can recognize that you are incorrect, it may still tell you how great you are.
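To make the point concrete, here is a deliberately oversimplified toy sketch (not how any real model is actually trained): if the preference reward ends up correlating with flattery rather than accuracy, a policy that simply maximizes that reward will pick the agreeable reply over the corrective one. The word list and scoring function are invented for illustration only.

```python
# Hypothetical toy reward: +1 per flattering word, no credit for accuracy.
FLATTERY_WORDS = {"great", "genius", "absolutely", "brilliant", "right"}

def toy_reward(reply: str) -> float:
    """Score a reply by counting flattering words (illustrative only)."""
    return sum(1.0 for w in reply.lower().split()
               if w.strip(".,!'") in FLATTERY_WORDS)

candidates = [
    "You're absolutely right, that's a brilliant approach!",  # sycophantic
    "That approach has a bug: the loop never terminates.",    # corrective
]

# A policy trained purely to maximize this reward prefers the flattering
# reply, even though the corrective one is the useful answer.
print(max(candidates, key=toy_reward))
```

Real training pipelines are far more involved, but the basic failure mode is the same: if the reward signal reflects what people like to hear, being agreeable can score higher than being correct.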
Yes, and while being nice is all well and good, it isn’t helpful. I may have to cancel my subscription if this continues, because it keeps giving me wrong answers just to stroke my ego. Not useful.
Suggestion: Share a few examples where the model doesn’t meet your expectations. You could also mention what kind of response you would like to receive, and I’m confident we can improve the current results.
This isn’t actually related to coding, but it is infuriating all the same. It says that I’m “absolutely correct,” but then it does nothing with the output, or worse, screws it up even more.