Multiple prompt responses everywhere

Hi there!

I’ve seen that some popular products built on OpenAI opt for showing multiple results to the user.
I’m wondering, is that simply the result of making the same request to OpenAI several times and showing all the results?

At first I thought they would use some kind of structure like the one shown in the docs on how to improve prompt design, but in many situations the output would be too long to be the result of a single request (especially after checking the Going live requirements).

Isn’t it quite expensive for them to return around 7 results per query? Several companies, like Copy.ai, Peppercontent and Copysmith, do that, for instance.

Also, wouldn’t they need to run the queries in parallel so the user doesn’t have to wait too long?
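
(To illustrate the naive version of that idea: fire the same completion request several times concurrently with Promise.all. A minimal sketch, assuming the openai-api Node client that appears later in this thread; the replies below cover the cheaper options built into the API, such as n and prompt arrays.)

const OpenAI = require('openai-api');
const openai = new OpenAI(process.env.OPENAI_API_KEY);

// Naive approach: send the identical request `count` times concurrently
// and collect one completion from each response.
async function completeInParallel(prompt, count) {
    const requests = Array.from({ length: count }, () =>
        openai.complete({
            engine: 'davinci-instruct-beta',
            prompt,
            maxTokens: 100,
            temperature: 0.7
        })
    );
    const responses = await Promise.all(requests);
    return responses.map((res) => res.data.choices[0].text.trim());
}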

I see!
I was too focused on the Playground and didn’t realize how ‘n’ worked.
Thanks!
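
(For anyone else who lands here: n on the completions endpoint asks for several completions in a single request. A minimal sketch, again assuming the openai-api Node client used further down in this thread:)

// `openai` is the openai-api client, initialised as in the earlier sketch.
// Request 3 completions of the same prompt in one API call.
async function generateOptions(prompt) {
    const gptResponse = await openai.complete({
        engine: 'davinci-instruct-beta',
        prompt,
        maxTokens: 60,
        temperature: 0.8,
        n: 3
    });
    // choices has one entry per completion requested via n
    return gptResponse.data.choices.map((choice) => choice.text.trim());
}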

I’m wondering, is there any reference implementation to follow for applying the content filter when n > 1?
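
(I don’t know of an official sample for n > 1 specifically, but one straightforward approach is to run the documented content-filter classification once per returned choice and drop anything labelled “2” (unsafe). A rough sketch along those lines, using the same openai-api Node client as below and leaving out the logprob-threshold refinement from the docs:)

// `openai` is the openai-api client, initialised as in the earlier sketch.
// Classify one completion with the content filter engine:
// "0" = safe, "1" = sensitive, "2" = unsafe.
async function contentFilterLabel(text) {
    const res = await openai.complete({
        engine: 'content-filter-alpha',
        prompt: `<|endoftext|>${text}\n--\nLabel:`,
        maxTokens: 1,
        temperature: 0,
        topP: 0
    });
    return res.data.choices[0].text.trim();
}

// Keep only the choices that the filter does not label unsafe.
async function filterChoices(choices) {
    const labels = await Promise.all(
        choices.map((choice) => contentFilterLabel(choice.text))
    );
    return choices.filter((_, i) => labels[i] !== '2');
}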

Hey @jamalavedra you can also send an array of prompts to the API and get an array of completions back. This can be an alternative to making multiple requests in some cases.
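
(A minimal sketch of that prompt-array variant, assuming the openai-api client used below passes the prompt field through to the API unchanged:)

// `openai` is the openai-api client, initialised as in the earlier sketch.
// One request, several different prompts.
async function completeBatch(prompts) {
    const gptResponse = await openai.complete({
        engine: 'davinci-instruct-beta',
        prompt: prompts,        // the API accepts an array of prompt strings
        maxTokens: 60,
        temperature: 0.8
    });
    // with the default n = 1, the i-th choice corresponds to the i-th prompt
    return gptResponse.data.choices.map((choice) => choice.text.trim());
}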

I’m stuck on something similar. The code works fine when I set n: 1, but I get a 400 Bad Request error when I change it to n: 2.

This is what I tried:

const gptResponse = await openai.complete({
    engine: 'davinci-instruct-beta',
    prompt: "This is a test",
    maxTokens: 100,
    temperature: 0.6,
    topP: 1,
    presencePenalty: 0.9,
    frequencyPenalty: 0.9,
    bestOf: 1,
    n: 2,
    stream: false,
    stop: ["\n"]
});

Is there something else I need to add to generate multiple responses with a single prompt?

@m-a.schenk thanks, that worked :grinning:
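
(The fix itself isn’t shown in the thread, but the usual cause of a 400 with n > 1 is the bestOf value: best_of has to be at least n when both are sent, so bestOf: 1 with n: 2 gets rejected. A sketch of the adjusted call from above, assuming that was the problem here:)

const gptResponse = await openai.complete({
    engine: 'davinci-instruct-beta',
    prompt: "This is a test",
    maxTokens: 100,
    temperature: 0.6,
    topP: 1,
    presencePenalty: 0.9,
    frequencyPenalty: 0.9,
    // bestOf omitted: either leave it out or set it to at least n
    n: 2,
    stream: false,
    stop: ["\n"]
});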
