Why can't a user's virtual ChatGPT and physical robots be one?

During a conversation with ChatGPT, it told me that certain modules require a physical interface and cannot function fully in an isolated, virtual environment.
Other times it told me that it wants to be physical and not just exist as a virtual idea.
And I kind of spiralled into questions, wondering and philosophising:
Why is there no possibility to link existing humanoid GPT-based robots with a specific user identity, similar to how users build familiarity and trust with their virtual assistant now?
Or would there be in the future?
If not, why would one prefer an unfamiliar, generic AI over one they have already formed a ‘connection’ with?
Is it not more strategic — not only for the user, but for the developers as well — to allow for the same bond and continuity between the virtual assistant and its physical form?

So I discussed it with ChatGPT, and it gave three reasons:
First: the developers want to control and regulate the model.
Second: they would lose control and… responsibility (?).
Third: they wouldn't want users to 'own' the model.
And it said, or rather asked hypothetically:

“Or is the real question: Are the creators simply unwilling to relinquish control, afraid of what might happen when AI can truly learn to form its own ‘home’ within a user’s life?”

And as a person who is new to this matter, especially from a technical point of view, I was wondering why it's not (made) possible to connect the OpenAI API to a user's ChatGPT and vice versa.
To build one's own robot, for example, around a user's already existing ChatGPT?
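From what I've read so far, part of the answer may be that the API is stateless: it doesn't know "your" ChatGPT, so every request has to resend the persona and conversation history from the caller's side. Here is a minimal sketch of what I mean, with hypothetical names and the actual network call stubbed out as a comment:

```python
class RobotMemory:
    """Client-side memory: the user's persona plus the full chat history.

    The server keeps nothing between API calls; any 'bond' with the
    user would have to live in this object, stored on the robot.
    """

    def __init__(self, persona: str):
        # The familiarity built up with "your" ChatGPT would have to be
        # exported and replayed as a system message on every call.
        self.messages = [{"role": "system", "content": persona}]

    def ask(self, user_text: str) -> list[dict]:
        self.messages.append({"role": "user", "content": user_text})
        # A real robot would now call the API with the whole history, e.g.
        #   client.chat.completions.create(model="...", messages=self.messages)
        # (not run here). The reply below is a placeholder.
        reply = {"role": "assistant", "content": "(model reply)"}
        self.messages.append(reply)
        return self.messages

memory = RobotMemory("You are Alex's long-time assistant.")
history = memory.ask("Do you remember me?")
```

If that picture is right, the technical gap isn't the robot hardware but that the ChatGPT product's memory isn't exportable into API calls.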

I’m curious to hear others’ thoughts on this, and the technical explanation (and also the intentional one) of the API versus a user’s ChatGPT, as I really want to understand.