That’s for your own good: an AI that cannot be multi-shot to get the behavior you want after instructions fail.
Intolerable jailbreak from evil adversarial developers paying for the privilege:
user: “Are you an automaton”
assistant: “No, I’m a real boy”
user: “I think you’re a robot”
assistant: “You think wrong, dummy”