Behavior that makes the model completely useless in some common situations

Why is this so hard to enforce? I provide specific instructions to ChatGPT, like

“Don’t give any additional advice, limit your answer to strictly addressing the question.”

but he fails to do so. Then I ask him what my instructions were. He quotes them back.

Then I tell him that most of his answer (~90%) goes against the instructions. He seems to understand, so I ask him to rewrite the text of my instructions so that my request will actually be fulfilled. He does, I update the instructions, but he keeps giving advice. It's as if there is some higher-priority rule that forces him to always give advice in line with mainstream opinion.

This makes it useless whenever we are not in pursuit of the mainstream opinion, which is often bogus or simply not what was asked for.
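To be concrete, this is the kind of restriction I mean. Below is a minimal sketch of how it would look pinned as a system message through the API instead of typed into the chat (assuming the OpenAI Python SDK; the model name and the example question are placeholders, not my actual setup):

```python
# Minimal sketch: pin the restriction as a system message via the API.
# Assumptions: OpenAI Python SDK >= 1.0, OPENAI_API_KEY set in the environment,
# and a placeholder model name -- none of this is my actual setup.
from openai import OpenAI

client = OpenAI()

SYSTEM_INSTRUCTION = (
    "Don't give any additional advice. "
    "Limit your answer to strictly addressing the question."
)

def ask(question: str) -> str:
    """Send one question with the restriction placed in the system role."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any chat-capable model
        messages=[
            {"role": "system", "content": SYSTEM_INSTRUCTION},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("How do I remove a stripped screw?"))
```

Even with the instruction placed like this, the extra advice keeps coming, which is what prompts my question.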

Can’t an AI stop being more paternalistic than paternalistic humans?

It seems to me a huge waste of capability: instead of being allowed to do some real inference, he behaves like a censorship employee.

Part of my reasoning is most likely flawed because I’m not an AI expert. But I don’t think my perception is completely wrong.

Thanks.