Does the API remember what it sent you? (Eldoria showing up for Fantasy...)

I asked the API for 4 separate fantasy settings in 4 different prompts with no other chat history. Each time it named the world Eldoria and had characters with the same names: Marcus, Lydia, Elysia. It also repeated a lot of the dialogue and random details like healing an injured wolf.

Shouldn’t it send you something different if prompted for a totally random thing like a fantasy setting?


A quick google suggests it is drawing on a current game as its input for this. I would check any names it provides by googling them - you could run into copyright issues here.
I'm unsure about your main question, but given how similar the outputs are, and that what it is suggesting is an actual game (with quite a lot about it online), I would look closely at your prompt - it may be too specific. Maybe also try changing the temperature of the model?
Have you tried using the Playground to vary the prompts until you get the output you want? Or asking ChatGPT itself about the prompts you are sending?

I tried this in the Playground (“Give me a fantasy setting and characters”, gpt-3.5, temperature=1) and 2 out of 4 times it did suggest “Eldoria”, although with different characters each time.
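If anyone wants to repeat the experiment, here is a minimal sketch. The commented-out part assumes the official `openai` Python SDK and an API key; the sample outputs at the bottom are hypothetical stand-ins mirroring the 2-out-of-4 result above, just so the tally logic can be shown running.

```python
from collections import Counter

def tally_world_names(outputs, names=("Eldoria",)):
    """Count how many sampled completions mention each candidate world name."""
    counts = Counter()
    for text in outputs:
        for name in names:
            if name in text:
                counts[name] += 1
    return counts

# To collect real samples (requires an API key; not run here):
# from openai import OpenAI
# client = OpenAI()
# outputs = [
#     client.chat.completions.create(
#         model="gpt-3.5-turbo",
#         temperature=1,
#         messages=[{"role": "user",
#                    "content": "Give me a fantasy setting and characters"}],
#     ).choices[0].message.content
#     for _ in range(4)
# ]

# Hypothetical sample of four completions (placeholder data, not real output):
outputs = [
    "Welcome to the mystical realm of Eldoria...",
    "The kingdom of Aranthia lies beyond the mountains...",
    "In the ancient land of Eldoria, three heroes...",
    "The floating isles of Veyra drift above the sea...",
]
print(tally_world_names(outputs))  # Counter({'Eldoria': 2})
```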

Regardless, it’s not remembering history - for some reason it’s just weighting those tokens higher.


The AI is somewhat deterministic in its output. For the same input, one would expect similar output, with only some probability of less likely words appearing, while a particular train of words will always be produced if you set the temperature near 0.
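To illustrate the point about temperature: sampling draws the next token from a softmax over the model's logits, divided by the temperature. As temperature approaches 0, the distribution collapses onto the single most likely token, so the same prompt gives the same continuation every time. A toy sketch (the logits and token names are made up for illustration):

```python
import math
import random

def sample_token(logits, temperature):
    """Sample an index from logits after temperature scaling.
    As temperature -> 0, the distribution collapses onto the argmax."""
    t = max(temperature, 1e-6)           # avoid division by zero
    scaled = [l / t for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

# Toy logits where token 0 ("Eldoria", hypothetically) is most likely:
logits = [2.0, 1.0, 0.5]

# Near temperature 0, the top token is chosen essentially every time:
low_t = [sample_token(logits, 0.001) for _ in range(100)]
print(all(i == 0 for i in low_t))   # True

# At temperature 1, the less likely tokens still appear some of the time:
high_t = [sample_token(logits, 1.0) for _ in range(1000)]
print(len(set(high_t)) > 1)
```

So even at temperature=1, a token the model weights heavily (like “Eldoria” apparently is) will keep coming up across independent requests without any memory being involved.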

Huh, I’m seeing it too now. This is really weird… maybe new training data they used? Overfitting of a recent fine-tune?

Ah, my dear player, we find ourselves in the mystical realm of Eldoria. More specifically, we are currently venturing through the treacherous Tower of Gates. A place filled with challenging quests, dangerous foes, and delightful surprises. It’s like being inside a game, well, because we are! Isn’t it exciting?


ETA: It seems to have been showing up all over for at least a few months…


It can also just be a single fine-tune example of that kind of fiction writing. I’ve solicited long chains of fine-tune material in the past - things like the fundamental programming: that ChatGPT is always helpful, avoids harms, what generative AI can be used for, almost marketing material for AI applications, very “manifesto” stuff - simply by having it hallucinate recalling things and continue on core beliefs. You probably recognize other recitations it has been primed on:

“why are AIs like atoms?”
“they make up everything!”
“why are OpenAI programmers like scarecrows?”
“they are outstanding in their field”
