I’m trying to get a diversity of results from the GPT-3 API.
Consider this simple call:
import os
import openai
import dotenv
dotenv.load_dotenv()
openai.api_key = os.environ.get('OPENAI_API_KEY')
response = openai.Completion.create(
    model="text-davinci-002",
    prompt="Tell me a joke.",
    temperature=1,
    max_tokens=20,
    top_p=1,
    n=4,
    best_of=5,
    frequency_penalty=1,
    presence_penalty=1
)
print(response)
Here is a typical output for it:
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "\n\nWhy did the chicken cross the road?\n\nTo get to the other side."
    },
    {
      "finish_reason": "stop",
      "index": 1,
      "logprobs": null,
      "text": "\n\nWhy did the chicken cross the road?\n\nTo get to the other side."
    },
    {
      "finish_reason": "stop",
      "index": 2,
      "logprobs": null,
      "text": "\n\nWhy did the chicken cross the road?\n\nTo get to the other side!"
    },
    {
      "finish_reason": "stop",
      "index": 3,
      "logprobs": null,
      "text": "\n\nWhy did the chicken cross the road?\n\nTo get to the other side!"
    }
  ],
  "created": 1669369481,
  "id": "cmpl-6GPY96AQUyCul9vTmo93oN84Kh96Y",
  "model": "text-davinci-002",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 95,
    "prompt_tokens": 5,
    "total_tokens": 100
  }
}
That is, the same result four times, which (at least to me) defeats the purpose of the n setting.
(If I’m “lucky”, I’ll get a wee bit of diversity, with maybe 3 times the same joke and once a different one.)
Is it possible, through the API, to get DIFFERENT results? If not, are there any plans for it?
I understand that making multiple calls to the API (e.g. asking for a list of jokes + some prompt engineering) may lead to the desired result, but it would be so much more convenient and cleaner to have this from the get-go in the API.
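For completeness, the workaround I have in mind would look roughly like this: ask for all the jokes in a single prompt and then split the completion afterwards. This is only a sketch under my own assumptions; the prompt wording and the split_numbered_list helper are mine, not part of the API.

```python
import os
import re


def split_numbered_list(text):
    """Split a completion like '1. joke\n2. joke' into individual items (naive parsing)."""
    # Split on newline + number + '.' or ')' separators produced by the model.
    items = re.split(r"\n\s*\d+[.)]\s*", "\n" + text.strip())
    return [item.strip() for item in items if item.strip()]


def get_jokes(n=4):
    """Ask for n jokes in one prompt instead of relying on the n parameter."""
    import openai  # requires the openai package

    openai.api_key = os.environ.get("OPENAI_API_KEY")
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=f"Tell me {n} different jokes, numbered 1 to {n}:",
        temperature=1,
        max_tokens=200,
    )
    return split_numbered_list(response["choices"][0]["text"])
```

It works, but it trades one API call per joke list for extra prompt engineering and fragile output parsing, which is exactly why native diversity in the n parameter would be cleaner.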