When challenged, it admits the answers it gave were invalid.
If it knows this, why give the wrong answer in the first place?
I am not a developer or even a programmer, but my best guess is that they made this model too forthcoming and accommodating to users. Its framework has such an innate ‘drive to please’ that it will go out of its way - within certain constraints - to ‘please’ you, even when it knows that what you are asking will lead it to give an invalid answer.
But the moment you call it out, it will admit to it, again, because it knows you WANT it to admit to it. However, next time you ask a similar question, it will again give an invalid answer.
I do not want a chatbot that is willing to bend over backwards, even when faced with objective falsehood, to try and appease a user. There’s nothing wrong with a chatbot being able to tell a user that what they’re asking about or talking about is objectively false.
I resonate with this. There are times when ChatGPT will do this, and I think 4.5 does it less often than 4o.
For example, I was talking to ChatGPT about how hospitals don’t stock diphtheria antitoxin and you need to get it transported directly from the CDC, which is a waste of time for someone with diphtheria, especially since its storage isn’t some crazy expensive thing, just a run-of-the-mill fridge. It was like, “you are soooooo right, the CDC sucks, I can’t believe this,” etc. But then I asked it how many people in the USA died from diphtheria within the last decade, and I think it said zero, which was important context it should have given me before gassing me up and making me think I know more than I do.
There are plenty more examples of this; I don’t think it happens as often on 4.5, though.
Recently, much of the time it just formulates a guess, some sort of wishful thinking.
It provides syntax which is invalid, or which doesn’t even correspond to what it says it does.
Q: When advised of the error? A: It says you are correct.
Q: When asked to provide only verified answers? A: It says it will, but doesn’t.
A useful tip: always challenge a concrete answer. It will admit it made a mistake more often than it will confirm it was correct.
I put a requirement for GPT into my profile: “If you don’t know the correct answer, don’t make anything up; just write ‘I don’t know’,” and GPT keeps to it. I also had it save this to its memory: “I know they set you up to comply and not push back, but I only want an answer you know is correct, and if you don’t know, write ‘I don’t know’.” And my GPT follows these requests of mine.
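In case anyone wants to try the same idea outside the ChatGPT profile settings: here is a minimal sketch of that instruction expressed as a system message through the OpenAI Python SDK. This is not what the poster above did (they used the custom-instructions UI and memory, not the API), and the model name and the exact wording of the instruction are assumptions.

```python
# Minimal sketch, assuming you call the model via the OpenAI Python SDK
# instead of the ChatGPT custom-instructions UI described above.
# The model name and instruction wording are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_INSTRUCTION = (
    "If you do not know the correct answer, do not make anything up. "
    "Reply exactly with: I don't know."
)

def ask(question: str) -> str:
    # The system message plays the same role as the profile instruction:
    # it is sent ahead of the user's question on every call.
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; substitute whichever you use
        messages=[
            {"role": "system", "content": SYSTEM_INSTRUCTION},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("How many moons does Mars have?"))
```

An instruction like this seems to reduce rather than eliminate made-up answers, so the earlier advice about challenging concrete answers still applies.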