Is there a separate memory that assistants have? Because I have around 20 messages set for context, but my assistant somehow remembers stuff from over 100 messages ago, and it's not anything I've included in the instructions.
Did you talk about it in any of your queries?
Nope. Unless something changed recently, the context you set for it should be its memory.
Sometimes these models can just be really good guessers and use context clues to make assumptions about stuff that occurred previously. More often than not, though, people don't realize how much they reference previous content in their own queries, and that provides the essential information.
I have a 3D character being controlled by it, with emotions triggered by tokens like (confused). For the confused expression I made the character's eyes glitch, but the assistant doesn't know this; it only knows that the token triggers a confused face. When I asked it to act like a robot, it started acting like a robot with the glitch face triggered… which didn't make sense. It was accurate for a robot, but there was no reason to use (confused). And when I asked what the confused face looked like, it said a glitch screen. It could be a CRAZY coincidence, but I did mention it to the assistant probably over 100 messages ago, and possibly in a different thread. It seemed similar to how normal ChatGPT can append things to memory when you chat with it, but if that's not a feature…
What is the name of the trigger it’s setting?
It might be better to set a specific trigger for each emotion, including acting like a robot.
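To show what I mean, here's a minimal sketch of per-emotion triggers. All the names here (the token format, the dictionary, the function) are hypothetical and not tied to any specific platform; it just assumes the assistant emits parenthesized tokens like (confused) in its replies:

```python
import re

# Each state gets its own trigger, including a dedicated (robot) trigger,
# so "confused" is no longer overloaded for two different meanings.
TRIGGER_TO_EXPRESSION = {
    "confused": "glitch_eyes",
    "robot": "glitch_eyes",   # reuses the same animation, but the trigger stays distinct
    "happy": "smile",
}

def extract_expressions(reply: str) -> list[str]:
    """Find (trigger) tokens in a reply and map them to face animations."""
    tokens = re.findall(r"\(([a-z_]+)\)", reply.lower())
    return [TRIGGER_TO_EXPRESSION[t] for t in tokens if t in TRIGGER_TO_EXPRESSION]

print(extract_expressions("(robot) Beep boop. I am a machine."))
# -> ['glitch_eyes']
```

That way the assistant can pick (robot) when roleplaying a robot without ever needing to know what (confused) actually looks like.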
The trigger is named (confused), which links to the glitch face, because I thought that was fitting for the character being confused. BUT she was using (confused) as if she knew it triggered glitching, when she shouldn't know, and when asked what she thought it looked like, she said a glitch screen. I don't know if this makes any sense, but it's like she knew exactly what she was triggering without having info about it in the instructions, and without it being a part of the context she knows.