Hi. I’m developing an app via the API. The text output that GPT-3.5 Turbo returns includes text identical to a previous API call. For example, suppose the app sends a prompt asking GPT to return a random list of 10 cities to visit next summer anywhere in the world (so the question is very broad). GPT returns a list of 10 random cities. The next time I use the app to generate a list from the exact same prompt, a percentage of the answers are the same as the previous response, i.e. the first list included Chicago and the second list also includes Chicago. (Note: this is just an example, not my actual prompt.) I’m wondering whether the frequency and presence penalty settings can solve this problem. The issue to solve is similar text appearing across separate API calls. So, a related question: for user A, does GPT remember the data previously sent to that user (assume this is a Chrome app a user can use several times per day) from previous API calls? Or does GPT have no memory of the data sent to a particular user on a previous API call? I want the user to be able to use the Chrome app without getting repeated text they saw the last time they used it. I hope this makes sense.
How to stop results from separate API calls that use the same prompt having the exact same answers or text
There are two things going on.
First, as you mentioned, there are parameters like temperature, presence_penalty, and frequency_penalty that you can tune to produce more varied output.
Second, the ChatGPT API has no inherent memory of your previous calls. You have to build a chat history yourself: take the output message and append it to your chat history before you append the next input message and send the whole thing to ChatGPT again.
For example, if you want 10 completely new cities, you can do something like this:
*initialize chat history
System Message: “You are a travel advisor helping the user to plan a trip.”
*append system message to chat history
User message: “Give me ten random cities to visit.”
*append user message to chat history
*send chat history to ChatGPT
ChatGPT: “Here are ten cities you would want to visit…”
*append ChatGPT response to chat history
Then on the second iteration you can say, “I don’t like those cities, give me ten new ones,” and ChatGPT will have access to the previous conversation.
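The flow above can be sketched in Python. This is a minimal sketch assuming the Chat Completions message format; the `client` call is commented out and the assistant reply is a hypothetical placeholder, since the real text comes back from the API.

```python
# Build the chat history yourself; the API has no memory between calls.
messages = []

# *initialize chat history with the system message
messages.append({"role": "system",
                 "content": "You are a travel advisor helping the user to plan a trip."})

# *append the user message, then send the whole history
messages.append({"role": "user",
                 "content": "Give me ten random cities to visit."})

# response = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
# reply = response.choices[0].message.content
reply = "Here are ten cities you would want to visit..."  # placeholder for the real reply

# *append the assistant response so the next call can see it
messages.append({"role": "assistant", "content": reply})

# Second iteration: the model now sees the earlier list in the history.
messages.append({"role": "user",
                 "content": "I don't like those cities, give me ten new ones."})
```

Each call sends the full `messages` list, which is the only way the model "remembers" the earlier answer.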
As you can tell, the chat history keeps growing longer and longer, and eventually it will exceed ChatGPT’s input token limit. So you have to implement a function to limit the chat history length. For my very simple application, I just keep the last ten messages.
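A trimming helper like the "keep the last ten messages" approach could look like this. It is a sketch, not a tuned solution; it also preserves the system message, which is an assumption on my part (you usually want the instructions to survive trimming).

```python
def trim_history(messages, max_messages=10):
    """Keep the system message plus the most recent messages,
    so the history stays under the model's input token limit."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_messages:]
```

Call it on the history right before each API request.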
It’d be good to share your actual use case if you can. But know that if it is similar, GPT is bad at random selection of things. A better approach might be to ask it for 50 city names, then parse the list and randomly select 10 of them in your code.
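The over-generate-then-sample idea might look like this. The parsing assumes the model returned one city per line (possibly with "-" or "•" bullets), which you would adapt to whatever format your prompt asks for.

```python
import random

def pick_cities(model_output, k=10):
    """Parse a newline-separated list from the model and sample k cities at random.
    True randomness comes from your code, not from the model."""
    cities = [line.strip("-• ").strip()
              for line in model_output.splitlines() if line.strip()]
    return random.sample(cities, k)  # sampling without replacement
```

Because `random.sample` draws without replacement, the 10 cities are guaranteed distinct within one call, and different calls vary even if the model's 50-item list is stable.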
And as mentioned above, GPT has no memory between calls, so you can try tweaking those other settings and see if you get better results.
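For the settings side, one way to keep the tweaks in one place is a small helper that bundles the sampling parameters. The specific values below are illustrative starting points, not tuned numbers, and the commented-out call assumes the OpenAI Python client.

```python
def variation_params(temperature=1.1, presence_penalty=0.6, frequency_penalty=0.4):
    """Bundle the sampling settings that encourage more varied output.
    Higher temperature and positive penalties discourage repeated tokens."""
    return {"model": "gpt-3.5-turbo",
            "temperature": temperature,
            "presence_penalty": presence_penalty,
            "frequency_penalty": frequency_penalty}

# client.chat.completions.create(messages=history, **variation_params())
```

Note that presence_penalty and frequency_penalty only penalize repetition *within* one response; they cannot see what a previous API call returned, so they help variety but don’t solve cross-call repeats on their own.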
I’ve had luck by adding a small random prompt in front of each request.
import random
prefix = f"Case {random.randint(0, 9999)} ({random.choice(list_of_words)}):\n"
It doesn’t have a lot to do with the actual request, but it seems to introduce enough randomness to vary the output generation.
Thank you for taking the time to write such a thoughtful response. Much appreciated and what you say makes sense.
Great, I’ll give this a try. Thank you kindly for sharing this.