"gpt-5": how to get the almost the same result for the same input params?

I want to know how I can get the exact same response from gpt-5 when I put in the same parameters, because I see I can't set 'temperature' for any of the gpt models.

Perhaps you should write code that actually can be read?

Since this is impenetrable, let’s have AI at least make the snippet readable here:

# method 1
response_text_1 = []
for Ai in A:
    # Ai is a series of numbers which can be used to plot a figure.
    ...
    text_prompt_from_Ai = ...  # prompt string built from the Ai numbers (elided)
    ...
    messages = [
        {
            "role": "user", 
            "content": [
                {
                    "type": "input_text", 
                    "text": text_prompt_from_Ai
                }
            ]
        }
    ]
    
    messages[0]["content"].append(
        {
            "type": "input_image", 
            "image_url": f"data:image/jpeg;base64,''"
        }
    )
    
    response_text_1.append(
        self.client.responses.create(
            model='gpt-5-nano-2025-08-07', 
            input=messages
        )
    )
# method 2
response_text_2, response_img_2 = [], []
for Ai in A:
    # Ai is a series of numbers which can be used to plot a figure.
    ...
    text_prompt_from_Ai = ...    # prompt string built from the Ai numbers (elided)
    ...
    text_prompt_from_Ai_2 = ...  # a second, purely textual prompt (elided)
    ...
    fig_from_Ai = base64.b64encode(image_Ai).decode('utf-8')  # the figure plotted from the Ai numbers, base64-encoded
    
    messages = [
        {
            "role": "user", 
            "content": [
                {
                    "type": "input_text", 
                    "text": text_prompt_from_Ai_2
                }
            ]
        }
    ]
    
    messages[0]["content"].append(
        {
            "type": "input_image", 
            "image_url": f"data:image/jpeg;base64,{fig_from_Ai}"
        }
    )
    
    response_img_2.append(
        self.client.responses.create(
            model='gpt-5-nano-2025-08-07', 
            input=messages
        )
    )
    
    messages = [
        {
            "role": "user", 
            "content": [
                {
                    "type": "input_text", 
                    "text": text_prompt_from_Ai
                }
            ]
        }
    ]
    
    messages[0]["content"].append(
        {
            "type": "input_image", 
            "image_url": f"data:image/jpeg;base64,''"
        }
    )
    
    response_text_2.append(
        self.client.responses.create(
            model='gpt-5-nano-2025-08-07', 
            input=messages
        )
    )

So you’re adding some empty image string to some sequence of numbers and the AI is supposed to figure out what’s happening here when I can’t?

I guess the main thing to realize is that gpt-5-nano is going to be a poor token predictor, that it runs with no sampling controls exposed as API parameters, and that there can be LOTS of reasoning between the input and the output. So naturally the randomly sampled tokens you finally see are going to be significantly different between runs.
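If you only want to reduce the drift rather than eliminate it, about the only knobs left are pinning an exact dated snapshot and keeping the hidden reasoning budget as small as possible. A minimal sketch, assuming the Responses API’s reasoning.effort setting is accepted by this model (the client setup and the example prompt are placeholders, not part of the original code):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    # Pin a dated snapshot rather than an alias, and ask for minimal
    # reasoning so fewer sampled tokens sit between input and answer.
    resp = client.responses.create(
        model='gpt-5-nano-2025-08-07',
        reasoning={"effort": "minimal"},  # assumption: supported on gpt-5 models
        input=[{
            "role": "user",
            "content": [{"type": "input_text", "text": prompt}]
        }]
    )
    return resp.output_text

# Run the identical request twice and see how much the text still differs.
print(ask("Describe the trend in these numbers: 1, 3, 5, 9, 17"))
print(ask("Describe the trend in these numbers: 1, 3, 5, 9, 17"))

Even then, exact token-for-token repeatability is not something to expect from a reasoning model.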

I have a friend named Claude who can put together a better picture than I can by trying to compare a diff between two unexplained code snippets. Here are the results of that analysis:

Looking at this code, I can explain what’s happening and why they’re confused:

What’s Different Between Method 1 and Method 2?

Method 1:

  • Sends text prompt + empty/placeholder image (note the '' empty string in the base64 data)
  • One API call per iteration

Method 2:

  • First API call: Sends text_prompt_from_Ai_2 + actual plotted figure (fig_from_Ai)
  • Second API call: Sends text_prompt_from_Ai + empty/placeholder image (same as Method 1)
  • Two API calls per iteration

The Key Difference

In Method 2, they’re making an extra API call with the actual plotted figure. This response (response_img_2) is being generated but then not used for anything - it’s just appended to a list.
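If they did want to use it, the generated text is right there on each Response object. A minimal sketch, assuming response_img_2 holds the Response objects exactly as appended above:

for resp in response_img_2:
    # output_text is the Python SDK's convenience accessor for the
    # concatenated text output of a Responses API call.
    print(resp.output_text)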

The comparable outputs would be:

  • response_text_1 vs response_text_2 - these should be similar since both send the same text prompt with an empty image (see the sketch after this list)
  • But response_img_2 is completely different because it’s sending an actual image with a different prompt
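A minimal sketch of that comparison, assuming both lists were filled in the same order over A, and comparing the generated text rather than the Response objects themselves (which always differ in ids, timestamps, and token usage):

import difflib

for r1, r2 in zip(response_text_1, response_text_2):
    # Diff the text output of the two "comparable" calls for each item.
    diff = difflib.unified_diff(
        r1.output_text.splitlines(),
        r2.output_text.splitlines(),
        lineterm=""
    )
    print("\n".join(diff) or "identical")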

Why Are Results Inconsistent? (The “response_text_1” vs “response_text_1_again” issue)

They’re also complaining that running Method 1 twice gives different results. This is normal LLM behavior due to:

  1. Temperature/sampling - LLMs are stochastic by default
  2. No seed set - results will vary between runs

What They’re Probably Trying to Do (But Doing Wrong)

I suspect they want to:

  1. Send the plotted figure with a prompt → get response about the image
  2. But they’re mixing this up and comparing incomparable things

The code is confusing because the empty image placeholder makes no sense, and they’re not clearly separating what they’re testing.
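If the actual goal is step 1 - send the plotted figure with a prompt and get a response about the image - a single clean call per item is enough. A minimal sketch, assuming image_Ai holds the JPEG bytes of the plotted figure and text_prompt_from_Ai_2 is the question about it (describe_figure is a hypothetical helper, not from the original code):

import base64

def describe_figure(client, image_bytes, prompt):
    # Encode the real figure and send it together with its prompt in one call.
    b64 = base64.b64encode(image_bytes).decode('utf-8')
    return client.responses.create(
        model='gpt-5-nano-2025-08-07',
        input=[{
            "role": "user",
            "content": [
                {"type": "input_text", "text": prompt},
                {"type": "input_image", "image_url": f"data:image/jpeg;base64,{b64}"}
            ]
        }]
    ).output_text

# One call per item: the real image plus its prompt, no empty-image placeholder.
answer = describe_figure(self.client, image_Ai, text_prompt_from_Ai_2)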