Why does OpenAI repeat the same exact phrases over and over for different prompts?

I don’t know anything about programming (or computers in general, really), so I’m not looking for a super technical or jargon-heavy answer. But I am wondering if anyone can give a really simple explanation for why OpenAI tends to repeat a lot of the same phrasing over and over across multiple prompts / conversations. I mainly use OpenAI by asking it to write stories. And if I ask it to, say, describe what a character looks like, it tends to repeat the same exact phrasing that it already used to describe another character in a separate story.

The story prompts that I ask OpenAI to write about are usually pretty similar to each other, so it’s not surprising that a story written by OpenAI might occasionally include somewhat similar details to those generated in response to previous story prompts that I gave it. But I don’t understand why it often generates the *exact same*, very specific, phrasing over and over. Considering the massive amounts of text that OpenAI was trained on, it seems unusual for it to just repeat the same specific details and phrasing when told to write new stories.

Does anyone have an understanding of why it behaves this way?


May I ask you something? When you say “OpenAI”, do you mean ChatGPT, or another “completion model” from OpenAI? In my case, ChatGPT apologizes in the same way and often begins a sentence with “as an AI language model”, which is a bit irritating. I tried 12 times to make “him” write a 50-word story for kids using a minimal set of common English words. “He” failed every time, but for the opposite reason: “he” added more complex words on “his” own.


As with the previous question: if you are using the API rather than ChatGPT via the web interface, you can tune your request.

OpenAI’s language models have several settings that govern how creative the answer is (temperature), how strongly already-used words are penalized (frequency penalty), and how strongly the model is nudged toward new topics (presence penalty). Otherwise I’d ask ChatGPT itself for help.
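To make that concrete, here is a minimal sketch of what those API parameters look like in a request. The model name, prompt, and specific penalty values are just illustrative assumptions, not recommendations; the actual call is commented out since it requires an API key.

```python
# Sketch of an OpenAI chat request tuned to reduce repeated phrasing.
# Model name and values below are hypothetical examples, not prescriptions.
request_params = {
    "model": "gpt-3.5-turbo",  # example model choice
    "messages": [
        {
            "role": "user",
            "content": "Write a short story and describe the heroine's hair.",
        }
    ],
    "temperature": 1.1,        # above the default 1.0: more varied word choice
    "frequency_penalty": 0.8,  # penalize tokens the reply has already used
    "presence_penalty": 0.4,   # nudge the model toward new words and topics
}

# With the openai library installed and an API key configured, you would
# send this with something like:
# response = openai.ChatCompletion.create(**request_params)
```

Higher `frequency_penalty` values make each repeated use of a token less likely, which directly targets the “cascades like a waterfall” kind of repetition; `temperature` instead broadens word choice across the board.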


Hi @JC1

To get a professional, factual answer to your question, you must post the complete OpenAI API params (including the API endpoint, the prompt / messages, and all other params sent to the API).

HTH

🙂

When you say “OpenAI”, do you mean ChatGPT, or another “completion model” from OpenAI?

I’m mainly referring to ChatGPT, although I’m pretty sure I’ve had this issue in the Playground as well. I haven’t used the Playground much lately, so I can’t recall how often it happened there.

Sorry, I missed this until now! An example would be that if I ask ChatGPT to include a description of a female character’s hair in a story, it will almost always write something along the lines of, “her hair cascades down her back like a waterfall”. The exact sentence will of course vary in small ways each time, but ChatGPT will specifically use the word “cascades” and the comparison to a waterfall over and over again. It doesn’t do this every single time - but very, very frequently.

I know that there are simple enough ways to prevent this from happening. I could specify in the prompt that the character’s hair should be short, or I could tell ChatGPT not to use the word “waterfall”. But again, I’m just curious why a system that has been trained on such massive amounts of data keeps using the same specific description on such a frequent basis.

This happens regardless of whether I’m using GPT-3.5 or GPT-4.

There’s always going to be some bias and preferences based on the training data.

For example, DALL·E will almost always (in my testing) draw nature if the prompt is noise (such as a single letter, or even just a color).