I get the same response in each API call!

I have a simple Python script that suggests subtopics related to a main topic by calling the GPT-4 API.
What’s incredible is that every time the script generates another suggestion, it is almost exactly the same as the previous one.
And this happens every single time I run it.
For example, this is the main topic provided: “biography of famous people”, and these are the suggested topics:

“Exploring the Rise of Albert Einstein: Triumphs and Challenges in the World of Physics.”

“The Life and Legacy of Albert Einstein: Unveiling the Genius Mind Behind the Theory of Relativity”

“The Unseen Journey: A Deep Dive into Albert Einstein’s Formative Years”

“Unraveling the Genius: The Life and Accomplishments of Albert Einstein”

Since each API call is independent from the previous one, how is it possible that I always get suggestions about Einstein?

This is the very simple script:


# Imports needed by the snippet (api_keys is the local module holding the key):
import openai
import api_keys

def generate_content(prompt, system_message):
    try:
        openai.api_key = api_keys.OPENAI_API_KEY
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": system_message},
                {"role": "user", "content": prompt},
            ],
            temperature=0.8,
        )
        return response['choices'][0]['message']['content'].strip()
    except Exception as e:
        print(f"An error occurred while generating content: {e}")
        return None

prompt = f"Please suggest me a topic for a short {video_type} based on this main argument: {general_topic}. Provide me the topic in a single short sentence without any additional comments. The topic must be based on concrete characters and facts and be focused on a simple and clear argument. Avoid abstract concepts."

system_message = "You are a documentary narrator. Your task is to write the text for the narrator of the documentary. Use a formal and rigorous style."

specific_topic = generate_content(prompt, system_message)


If I paste the same prompt into ChatGPT I get, as expected, different suggestions each time.

Can someone help me solve this incredible issue?

Welcome to the forum…

I’d work on this a bit. Feed it to GPT-4 and explain what you’re trying to do, and try to get a better worded version.

Also try setting temp to 1.0…

Experiment a bit and let us know. Good luck.

Thank you, but I don’t think that’s the point; it seems to reveal a deeper issue.
This is not a problem of prompt or temperature. The point is that it should be simply impossible to get roughly the same response when repeating the API call in a loop.
Out of the millions of possible responses, I keep getting the same subject as a reply?
Each call to the API is independent, so no matter what prompt or temperature I use, I should receive an answer that is totally unrelated to the previous one.
And indeed, if I paste the same prompt into ChatGPT, I correctly receive totally different responses each time.
So it seems there is some underlying difference in how calls are managed through the API that leads to this absolutely abnormal result.
No matter what general topic I propose, most of the answers are very closely related subtopics.
If I write “Roman Empire” as the main subject, I get at least 5 responses that start with “The legacy of the Roman Empire…”
It simply does not make sense.

That is a misconception. If you supply the same input to an AI model, you get the same token probabilities out of it. It is precisely because the outputs are not dependent on the last response that they are similar or the same.

For each token the AI could produce, it uses massive vector math to compute a score for every token it could emit. The results are then normalized so that together they fit within a total probability of 100%.
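That normalization step is the softmax function. Here is a minimal sketch with made-up scores for four candidate tokens, just to show how raw scores become probabilities that sum to 100%:

```python
import math

def softmax(logits):
    """Normalize raw token scores (logits) into probabilities summing to 1."""
    # Subtract the max before exponentiating, for numerical stability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits for four candidate tokens (assumed values).
probs = softmax([2.0, 1.5, 0.5, -1.0])
print(probs)  # each value in (0, 1); the whole list sums to 1.0
```

Higher-scoring tokens get larger shares, but every token keeps a nonzero slice of the probability mass.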

[image: logprobs for a mood-completion prompt]

The API’s logprobs feature shows the most certain choices (“bl” is completed as “blissful”).

An AI that only ever answers the single best way would only put out the word “content” in this rather ambiguous case of describing someone’s mood. This is the response if you set top_p to 0.0: you get only the first token possibility, no matter how many runs.
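A toy sketch of why top_p = 0.0 is deterministic (this is an illustration of nucleus sampling, not the actual API internals; the candidate tokens and probabilities are assumed):

```python
def pick_top_p(token_probs, top_p):
    """Keep the smallest set of top-ranked tokens whose cumulative
    probability reaches top_p, then renormalize. With top_p == 0.0
    only the single most likely token survives, so every run is identical."""
    ranked = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(p for _, p in kept)
    return {token: p / total for token, p in kept}

# Assumed mood-completion candidates, loosely based on the example above.
candidates = {"content": 0.26, "happy": 0.22, "bl": 0.20, "good": 0.17}
filtered = pick_top_p(candidates, 0.0)
print(filtered)  # {'content': 1.0} — only the top token remains
```

With a larger top_p (say 0.9), several tokens survive the cutoff and sampling can land on any of them.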

With softmax and token sampling, we instead treat those certainties as the likelihood of each token appearing in the text. 26% of the time, the token produced here will be the top match.
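You can simulate that sampling behavior directly. In this sketch the probabilities are assumed values, with only the ~26% figure for the top token taken from the example above:

```python
import random
from collections import Counter

random.seed(0)  # fixed seed just to make the demo reproducible

# Assumed next-token distribution; "content" leads at 26%.
tokens = ["content", "happy", "bl", "good", "cheerful"]
probs = [0.26, 0.22, 0.20, 0.17, 0.15]

# Draw 10,000 next tokens. Frequencies track the probabilities,
# so the top token appears roughly 26% of the time — not every time.
draws = random.choices(tokens, weights=probs, k=10_000)
counts = Counter(draws)
print(counts.most_common())
```

This is why repeated identical calls cluster around the same few answers: the model is sampling from one fixed distribution, not remembering your previous request.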


Ask a silly question like “The most preeminent and notable scientist of the 20th century is” and get:

[image: logprobs for the “most preeminent scientist” completion]

The other tokens are different ways of arriving at similar answers, reflecting how similar language would be completed in the training data.


So, you want different scientists? You’d better make a list of them first.
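One way to do that (a hypothetical sketch — the names and prompt wording are my own, not from your script): pick the subject yourself at random and inject it into the prompt, instead of hoping the sampler wanders away from Einstein.

```python
import random

# Hypothetical pool of subjects; extend with whoever you want covered.
scientists = ["Marie Curie", "Niels Bohr", "Rosalind Franklin",
              "Alan Turing", "Srinivasa Ramanujan"]

def build_prompt(general_topic):
    """Pin the subject in the prompt so the model cannot default to its
    most probable choice every call."""
    person = random.choice(scientists)
    return (f"Suggest a topic for a short video about {general_topic}, "
            f"focused specifically on {person}. "
            "Answer in a single short sentence with no additional comments.")

prompt = build_prompt("biography of famous people")
print(prompt)
```

You can also track which names you have already used and remove them from the pool, which guarantees variety across a whole batch of calls.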