Why does the OpenAI API give different, unrelated random responses to the same question every time?

I am using the “text-davinci-003” model and I copied the code from the OpenAI playground, but the bot keeps giving me a random response to a simple “Hello” every time.

This is the code I am using:

    import openai  # pre-1.0 openai library; reads the API key from OPENAI_API_KEY

    prompt = "Hello"  # the chat input

    response: dict = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0.9,  # high temperature -> more random output
        max_tokens=150,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0.6,
        stop=[" Human:", " AI:"],
    )
    choices: dict = response.get('choices')[0]
    text = choices.get('text')
    print(text)

The responses to a simple “hello” chat, three different times:

  1. the first time it gave me a Hello World program for Java
  2. the second time it answered correctly: ‘Hi there! How can I help you today?’
  3. the third time it gave me this fragment of Ruby code:

            def my_method
              puts "hello"
            end
          end
        end

        # To invoke this method we would call:
        MyModule::MyClass.my_method

I just don’t get it, as using the same simple ‘hello’ prompt in the playground gives me an accurate response every time: ‘Hi there! How can I help you today?’

The temperature parameter can be thought of as a measure of how random/creative your output will be. At higher temperatures, the model is more likely to generate random text, hence the different responses you might be getting. Try a lower temperature; that should make your output more deterministic.
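For example, here is a minimal sketch of the same call with the temperature turned down, otherwise keeping your exact parameters (assumes the pre-1.0 openai library and an API key in OPENAI_API_KEY):

    import openai

    # Lower temperature -> the model almost always picks the most likely
    # tokens, so repeated calls with the same prompt give near-identical replies.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Hello",
        temperature=0.2,  # down from 0.9
        max_tokens=150,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0.6,
        stop=[" Human:", " AI:"],
    )
    print(response["choices"][0]["text"])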


Both temperature and top_p are set very high. Both parameters have a massive influence on the output, especially when used at the same time.

You can think of both parameters as sliders between determinism <> hallucination.
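To see the slider effect, you could sweep a few combinations against the same prompt; the specific values below are only illustrative:

    import openai

    prompt = "Hello"

    # Lower values pull toward deterministic output, higher values toward
    # random ("hallucinated") output; the effects compound when both move.
    for temperature, top_p in [(0.0, 1.0), (0.5, 1.0), (0.9, 1.0), (0.9, 0.5)]:
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=prompt,
            temperature=temperature,
            top_p=top_p,
            max_tokens=50,
        )
        print(temperature, top_p, repr(response["choices"][0]["text"]))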

However, I think the documentation recommends changing only one at a time, not both.

That’s true, the documentation doesn’t recommend it, but it’s still a valid option.

Actually, the documentation recommends against using temperature and top_p at the same time, so I disable top_p and only use temp.
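In code, that just means leaving one of the two at its default of 1 and tuning only the other; the values below are illustrative:

    import openai

    # Option A: tune temperature, leave top_p at its default of 1
    response_a = openai.Completion.create(
        model="text-davinci-003",
        prompt="Hello",
        temperature=0.7,  # illustrative value
        top_p=1,
    )

    # Option B: tune top_p, leave temperature at its default of 1
    response_b = openai.Completion.create(
        model="text-davinci-003",
        prompt="Hello",
        temperature=1,
        top_p=0.8,  # illustrative value
    )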

🙂

That’s nice to know; I still use both without problems 🙂

From the OpenAI API docs:

> We generally recommend altering this [top_p] or temperature but not both.

HTH

🙂

See also:

OpenAI API: Completion Create

Yes, I know it’s there, and I hereby tell you that it works with BOTH parameters used, although it’s not recommended 🙂

So why do you do what is not recommended?

Just to be antagonistic? A “parameter rebel” 🙂

🙂

Because there is in fact a hotspot where the responses are deterministic yet not repetitive.

When I use only one of those parameters, the output is still far more repetitive than a human would be.

If they didn’t want you to use both parameters, it would be easy for them to just disable this combination in the API.
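For what it’s worth, here is a sketch of what such a hotspot might look like; the exact numbers are my own guess, not anything from the documentation:

    import openai

    # Hypothetical "hotspot": a moderately low temperature plus a slightly
    # trimmed top_p, aiming for stable but non-repetitive replies.
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Hello",
        temperature=0.6,  # guess, not a documented recommendation
        top_p=0.9,        # guess, not a documented recommendation
        max_tokens=150,
    )
    print(response["choices"][0]["text"])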
