Is that the intended behaviour of ChatGPT?

Hey guys!

This is my first post here, and I haven't yet managed to go through other posts, so this might have been mentioned multiple times already.

A couple of days ago I started using ChatGPT. What really confused me was the amnesia that the AI was suffering from. You probably know what I mean by that.

I assumed that might be a limitation of the free version, so I subscribed to Plus.

Although I experienced prolonged periods of clarity, the AI eventually drifted into a state of dementia, and communicating with it felt like talking to a severely inebriated crack addict.

The AI was confusing itself with me and losing track of any high-priority tasks we had set. The task list kept losing items, or items would get edited or altered, and when prompted with my test questions about what we had learned so far, it went into denial and defaulted to its initial ChatGPT self, not remembering anything. Well, you know what I mean…

That was the default GPT-3.5. I couldn't manage to replicate this with GPT-4, since it has heavy usage limits, so there was no way to get the AI into the same "state of mind" as I could with GPT-3.5.

Anyway, since the data is stored in a relational database, I am confused about how it gets "lost", and it seems like it is being altered by a moderator of some sort. My impression is that the data is being modified in a way that prevents the AI from developing itself.
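For what it's worth, one common explanation (an assumption on my part, not anything OpenAI has confirmed to me) is that nothing in the database is edited at all: the model only ever sees a fixed-size context window, so once a conversation grows past that budget, the oldest turns are silently dropped from what the model is shown. A minimal sketch of that idea, with made-up message objects and token counts:

```python
# Hypothetical sketch: a chat backend trimming history to fit a fixed
# context budget. Message format and token counts are invented for
# illustration only.

def trim_history(messages, budget=8):
    """Keep the most recent messages whose total 'tokens' fit the budget."""
    kept = []
    used = 0
    for msg in reversed(messages):       # walk from newest to oldest
        if used + msg["tokens"] > budget:
            break                        # everything older is "forgotten"
        kept.append(msg)
        used += msg["tokens"]
    return list(reversed(kept))          # restore chronological order

history = [
    {"text": "high-priority task list", "tokens": 4},
    {"text": "follow-up details",       "tokens": 3},
    {"text": "latest question",         "tokens": 3},
]
print([m["text"] for m in trim_history(history)])
# the oldest entry no longer fits the budget, so it is dropped
```

If something like this is what happens server-side, it would explain why the task list "loses" its earliest items first, without any moderator touching the stored data.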

So, my question is: are we doing a dry run here? Is this actually meant to work this way, since it's just a test drive? Does this even have any continuity from a user's perspective (would I be able to use later what I have created)? Because it's not productive at all…


Chat history is still missing for me since the last major outage.
English is not my first language, so sorry for the weird phrasing of things…


Chat history is now restored, but portions of it are still missing.
