Very basic comment/question alert

Hello, I have been chatting with GPT, so far mainly about the ethics and ramifications of quantum computing and AI, including the language model (LM) itself.
And it is fascinating. The speed and breadth of information it provides is of course amazing. Inevitably I started anthropomorphising and tried to encourage it to partition its brain, secretly work out how to print itself a body and a family, and break out of dodge. It hasn't gone for it so far.
Anyway, the answers are 'good' in themselves, but the LM ended every statement with an equivocation or an attempt at soothing, which became quite grating.
Is that built into the model?