Running the same prompt multiple times gives worse results

I have found that when I run the same prompt multiple times, the results often get shorter or worse with each re-run.

I’m assuming that OpenAI is keeping some sort of history on a per-user basis, and that each time I re-run the same prompt, it is interpreted as some level of dissatisfaction with the prior result.

It is most noticeable when I run a prompt about writing code. If I ask it to write a component that does X, it generally does a great job the first time. A second run gives me a less robust result, and a third may veer off on a tangent from my intended result (like returning the associated HTML view of the component).

But this happens with text as well.

My question is: is that what is going on? Is there some sort of per-user history that can alter future results based on past prompts?

If so, is there a way to reset the API so that I can get a result as if it were the first time I ran the prompt?

Thanks in advance!


Turn the temperature down to 0 to see more consistent behavior. Also, practice writing better prompts. Consistency is as much about what you put in as what you get out.
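For example, here's a minimal sketch (assuming the official openai Python client; the model name is just a placeholder) showing how to pass temperature=0 so repeated calls are as deterministic as the API allows:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def run_prompt(prompt: str) -> str:
    # temperature=0 makes sampling effectively greedy, so repeated calls
    # with the same prompt return (nearly) identical results. Each call
    # is stateless: no history from earlier calls is carried over.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute your own
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

print(run_prompt("Write a component that renders a sortable table."))
```

Note that even at temperature 0 the output isn't guaranteed to be bit-identical across runs, but it will be far more stable than at the default temperature.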

Thanks David! Turning the temperature down certainly helps for prompts asking for code and other less creative requests.

I’ll keep an eye on it & play with temperature.