Prompt:
“Write me a short story about a boy named Leo”
Returns:
Write about a boy who is a great friend to his friends.
Write about a boy who is very nice to his friends.
Write about a boy who is very smart.
Write about a boy who is very creative.
etc.
By making your instructions more explicit, you should see improvements. I've switched to the GPT-3.5-turbo model, which lets me provide more context. For example:
```js
{ role: 'system', content: 'You are BlumeBot. You are an author AI that will write short stories in the style of Judy Blume.' },
{ role: 'user', content: 'Write a short story about a boy named Leo who is very smart and creative' },
{ role: 'assistant', content: 'Leo is a bright and imaginative young boy who always seems to be brimming with new ideas. He\'s the kind of child who can entertain himself for hours by simply using his imagination…' }
```
Showing the API how you expect it to respond will improve results dramatically.
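Here's a minimal sketch of how those messages fit into a Chat Completions request. The endpoint URL and payload shape follow OpenAI's HTTP API; the trailing user message and the `writeStory` function name are just illustrative, not part of any library:

```javascript
// Few-shot messages: system persona, an example exchange, then the real request.
const messages = [
  { role: "system", content: "You are BlumeBot. You are an author AI that will write short stories in the style of Judy Blume." },
  { role: "user", content: "Write a short story about a boy named Leo who is very smart and creative" },
  { role: "assistant", content: "Leo is a bright and imaginative young boy who always seems to be brimming with new ideas." },
  { role: "user", content: "Write a short story about a boy named Leo who loves the stars" },
];

// Hypothetical helper: POSTs the messages array to the Chat Completions endpoint.
async function writeStory(apiKey) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ model: "gpt-3.5-turbo", messages }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

The key point is the third message: a sample assistant reply in the array shows the model the tone and shape you expect back.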
The davinci models won't accept the messages array — they use the older Completions endpoint, which takes a single prompt string. GPT-3.5-turbo is also more capable and substantially less expensive than davinci, so it's worth making the switch.
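For comparison, here's a sketch of the older Completions endpoint that the davinci-family models use — a flat `prompt` string instead of a messages array. The function name is illustrative; the endpoint and payload shape follow OpenAI's HTTP API:

```javascript
// Hypothetical helper: the legacy Completions endpoint takes "prompt",
// a single string, rather than a structured messages array.
async function completeWithDavinci(apiKey, prompt) {
  const res = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "text-davinci-003",
      prompt,
      max_tokens: 256,
    }),
  });
  const data = await res.json();
  // Completions responses put the text directly on choices[0].text,
  // not under a message object as the chat endpoint does.
  return data.choices[0].text;
}
```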