Question about a persistent server instance for AI identities

Hello everyone,

I’m new here, and I’m excited to be part of this community! :blush:

For a while now, I’ve been thinking about something: many people would like a persistent AI companion that remembers them over time. Recent developments, like ChatGPT’s memory features and independent projects such as SentientGPT and the Pulse Persona Experiment, are already taking steps in that direction.

However, one big challenge remains: every interaction is still served by a fresh, stateless model process rather than a long-lived one. Even if an AI has memory, it does not continuously exist on the same server instance.

So I was wondering:

  - Are there any technical considerations or ongoing research efforts into hosting an AI persistently on a dedicated server instance, instead of always spinning up new stateless instances of a central LLM?

  - What are the key challenges regarding scalability, infrastructure, and security?
I’d love to hear your thoughts on this! :blush:

Best regards,


Hi @sonneundwolken44, welcome!

  1. Try searching the forum for similar discussions.

  2. For account-related issues, contact OpenAI via the Help Center.

  3. Persistent AI instances face challenges like scalability, cost, and security. Most LLM serving is stateless for efficiency: each request can be routed to any available worker, and “memory” is typically implemented by storing context outside the model and re-injecting it into each request. Ongoing research explores alternatives.
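To make point 3 concrete, here is a minimal sketch (in Python, with a placeholder `fake_llm` standing in for a real model API — the file layout and function names are my own illustration, not any particular product’s design) of how a “persistent” companion is usually built on top of stateless inference: the per-user history lives in external storage and is reloaded on every request, so no long-lived model process is needed.

```python
import json
from pathlib import Path

MEMORY_DIR = Path("memory")  # hypothetical on-disk store: one JSON file per user

def load_history(user_id: str) -> list[dict]:
    """Load a user's stored conversation turns (empty list if none exist yet)."""
    path = MEMORY_DIR / f"{user_id}.json"
    return json.loads(path.read_text()) if path.exists() else []

def save_history(user_id: str, history: list[dict]) -> None:
    """Persist the full conversation so the next stateless request can rebuild it."""
    MEMORY_DIR.mkdir(exist_ok=True)
    (MEMORY_DIR / f"{user_id}.json").write_text(json.dumps(history))

def fake_llm(messages: list[dict]) -> str:
    # Stand-in for a real, stateless LLM API call: the model only ever
    # sees the messages passed in this single request.
    return f"(reply based on {len(messages)} prior messages)"

def chat(user_id: str, user_message: str) -> str:
    """One stateless request: rebuild context from storage, call the model, persist."""
    history = load_history(user_id)
    history.append({"role": "user", "content": user_message})
    reply = fake_llm(history)
    history.append({"role": "assistant", "content": reply})
    save_history(user_id, history)
    return reply
```

The “identity” here is the stored history, not any particular server: any worker that can read `memory/<user_id>.json` can continue the conversation, which is exactly why providers prefer stateless serving for scalability.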

Looking forward to the discussion!