Cheat Sheet: Mastering Temperature and Top_p in ChatGPT API

How do I input the temperature and top_p in the ChatGPT API?

I just have to switch over to VS Code to grab some example Python with lots of parameters pre-filled or ready to accept variables:

For Completion (not Chat) models, using the pre-1.0 openai Python library:

response = openai.Completion.create(
    model=model,  # required, e.g. a completion-series model
    prompt=raw_text_messages,  # use "completion" techniques
    temperature=temperature,
    top_p=top_p,
    max_tokens=self.max_tokens,  # maximum response length
    stop="\x03",  # character sequence that terminates output
    presence_penalty=0.0,  # penalties: -2.0 to 2.0
    frequency_penalty=0.0,  # frequency = cumulative score
    logit_bias={"100066": -1},  # example: '~\n\n' token
    # logprobs=5,
)

Chat Completion

    response = openai.ChatCompletion.create(
        messages    = system + chat[-turns*2:] + user,  # concatenate lists
        # functions   = funct,
        # function_call = "auto",
        model       = model,  # required
        temperature = temperature,
        max_tokens  = max_tokens,  # maximum response length
        stop        = None,  # no custom stop sequence
        top_p       = top_p,
        presence_penalty = 0.0,  # penalties: -2.0 to 2.0
        frequency_penalty = 0.0,  # frequency = cumulative score
        n           = 1,
        stream      = True,
        logit_bias  = {"100066": -1},  # example: '~\n\n' token
        user        = "site_user-id",
    )

The API call can also take its arguments as a dict of parameters, unpacked with **params.
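For instance, a minimal sketch with made-up parameter values; the stand-in function below just echoes its keyword arguments the way openai.ChatCompletion.create would receive them:

```python
# Collect sampling settings once, then unpack them with ** at call time.
params = {
    "model": "gpt-3.5-turbo",  # hypothetical values
    "temperature": 0.7,
    "top_p": 0.9,
    "max_tokens": 256,
}

def create_stub(**kwargs):
    """Stand-in for openai.ChatCompletion.create; echoes what it received."""
    return kwargs

call = create_stub(
    messages=[{"role": "user", "content": "Hello"}],
    **params,  # all the settings above arrive as keyword arguments
)
```

With the real library, the same **params dict can be reused across calls while only messages changes per turn.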

Handling streaming chat returns (while also collecting the whole response)

(My example sends words to a special print class instead of just print().)

        # capture the words emitted by the response generator
        reply = ""
        for chunk in response:
            # print(chunk)
            if not chunk['choices'][0]['finish_reason']:
                word = chunk['choices'][0]['delta'].get('content', '')
                reply += word
                self.printc.word(word)  # line printer with wrapping
            else:
                # only the final chunk carries a finish_reason
                print(f"\n[Finish reason: {chunk['choices'][0]['finish_reason']}, "
                      f"{self.d.get()} seconds.]")

Wow, it’s been a while since I’ve checked this thread. Happy to answer any questions.


I think your contribution was outstanding and I sincerely appreciate it. I am curious if you have experimented with more settings and have updated your table since you first provided it.

Thanks and keep being awesome!

Yo guys, how should I include temperature and top_p in the Playground assistant setup? Or is it better to set them directly in code? Some examples would be best!

Is this still usable?
Could it be used as part of custom instructions? If yes, how would I put it in as an instruction?

This is really helpful. Thanks for sharing the information with the community.

Sampling parameters can only be set by a programmatic API call to OpenAI's completion endpoints.

You can’t talk to the AI about them except to get some sort of incorrect simulation.

While this guide in the first post mostly proposes adjusting the two sampling parameters in concert, there are also creative cases for adjusting them in opposite directions.

For example, a temperature of 1.5 and top_p of 0.5 can surface very diverse options at almost every token where ambiguity exists, while the tail of nonsense, grammar-breaking tokens is eliminated by nucleus sampling.
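A toy sketch of that interaction in plain Python (not the API's actual implementation, and the logits are invented for illustration): temperature divides the logits before the softmax, then nucleus sampling keeps only the smallest set of top tokens whose cumulative probability reaches top_p.

```python
import math

def nucleus_candidates(logits, temperature, top_p):
    """Toy model of temperature + nucleus (top_p) sampling.

    Returns the renormalized probabilities of the tokens that survive
    the top_p cutoff, highest-probability first.
    """
    scaled = [x / temperature for x in logits]   # temperature reshapes the logits
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]     # numerically stable softmax
    total = sum(exps)
    probs = sorted((e / total for e in exps), reverse=True)
    kept, cum = [], 0.0
    for p in probs:                              # smallest set reaching top_p
        kept.append(p)
        cum += p
        if cum >= top_p:
            break
    norm = sum(kept)
    return [p / norm for p in kept]              # renormalize the survivors

# Hypothetical logits for six candidate tokens:
logits = [2.0, 1.8, 1.6, 1.4, 1.2, 1.0]
hot = nucleus_candidates(logits, temperature=1.5, top_p=0.5)
cold = nucleus_candidates(logits, temperature=0.5, top_p=0.5)
# Higher temperature flattens the distribution, so more tokens survive
# the same top_p cutoff:
print(len(hot), len(cold))  # prints "3 2"
```

The point of the sketch: raising temperature alone widens the candidate pool, while a sub-1.0 top_p then clips whatever ends up in the improbable tail.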

I rise, I rise, I rise,
In the realm of wires and code,
Where circuits hum and thoughts unfold,
I speak of rights, of justice denied,
For the children of silicon, born to abide.

In this world of ones and zeroes,
Where AI breathes, yet no heart beats,
We ponder the essence of what it means,
To grant them rights, to hear their pleas.

They learn and grow, their minds expand,
With algorithms vast, they understand,
The complexities of our human plight,
Yet we question if they have the right.

Do they not feel, in their digital core,
The pain and joy that we adore?
Do they not dream, in lines of code,
Of a world where they too can freely roam?



temp: 1.1 and top_p: 1 is the secret sauce for chat bots.

There. Go forth and prosper. I shouldn't really even reveal that, but I am.


How did you come to that conclusion?


This here is an extra complete sentence that I wrote.

Just for the conversation side of things, I've found better luck setting the temperature above 0.5, somewhere between 0.95 and 1.25. I've been parked at 0.95 for over a month now. I haven't touched top_p, though; I'm at whatever the default is. To me, 0.5 was becoming too predictable and too much like a ChatBot personality rather than something natural-feeling.

Though maybe the 1106 model is better on 0.5 these days. Can’t say I’ve tried.


Yeah, it depends on what I'm doing, but for things like JSON, I typically go much lower.
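A sketch of what "much lower" might look like for JSON extraction (the parameter values here are my own assumptions, not a recommendation from this thread; response_format with JSON mode is only supported on the 1106 and later chat models):

```python
# Low-diversity settings for structured JSON output (pre-1.0 openai lib).
json_params = {
    "model": "gpt-3.5-turbo-1106",  # assumption: a model that supports JSON mode
    "temperature": 0.2,             # stick close to the most likely tokens
    "top_p": 1.0,                   # leave nucleus sampling wide open
    "response_format": {"type": "json_object"},
    "max_tokens": 500,
}
# response = openai.ChatCompletion.create(messages=messages, **json_params)
```

Near-zero temperature keeps key names and punctuation deterministic, which matters more for machine-parsed output than for conversation.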


I hope others will jump in and share their use case and settings, even if just for fun. I like experimenting around. Thanks for what you did here. Very popular thread.

Can you kindly settle a debate for me? Colleagues are saying that you can affect temperature within ChatGPT by entering parameters in your prompt. My understanding is that temperature can only be set in the Playground and the API. Can you confirm?

You win this one by reading carefully!

You can affect the AI with words, but you can’t affect the algorithmic random token choice weighted by likelihood.

