How to limit the response to only one complete result?

For example, when we prompt with many-shot examples like this:

Movie: Lord of the Rings
Snarky summary: Group spends 9 hours returning jewelry

Movie: The Revenant
Snarky summary: Leonardo DiCaprio wanders a frozen wasteland looking for an Oscar

Movie: The Departed
Snarky summary: Spied upon! Spied upon! We’re always spied upon. "Yes, but who the hell is doing the spying?"

Movie: Avatar
Snarky summary: Obnoxious blue aliens fuck up nature

Then we ask like this:

Movie: Pretty Woman
Snarky summary: Hooker with heart of gold falls for rich asshole. We all feel bad for the hooker.

Movie: The Wolf of Wall Street
Snarky summary: Jordan Belfort and his douchebag buddies do douchey things

Movie: Deadpool
Snarky summary: Deadpool does things Deadpool-y

Movie: Dr. Strangelove
Snarky summary: Nuclear bomb freaks out and kills everyone

We get an answer not only for the requested "Movie: Pretty Woman"; the engine continues generating more movie names and snarky summaries, as in the example above.

What I want to achieve is for the engine to generate only one answer at a time, instead of continuing automatically. Limiting the number of characters just results in incomplete/broken responses.

What am I missing here?

The main way to control the length of your completion is with the max tokens setting. In the Playground, this setting is the “Response Length.” These requests can use up to 2,049 tokens, shared between prompt and completion.

You can try increasing the response length to avoid incomplete responses.
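As a rough sketch of how this maps onto an API call (parameter names from the legacy OpenAI completions endpoint; the model name and values here are only illustrative), building the request as a plain dict makes the relationship clear:

```python
# Sketch of the request parameters (legacy OpenAI completions API).
# "max_tokens" is the "Response Length" slider in the Playground; the
# prompt tokens plus the completion tokens must fit in the model's
# context window, so a long prompt leaves less room for the answer.

payload = {
    "model": "text-davinci-002",  # illustrative model name
    "prompt": "Movie: Pretty Woman\nSnarky summary:",
    "max_tokens": 60,             # cap on completion length, in tokens
    "temperature": 0.8,
}

print(payload["max_tokens"])
```

If the completion comes back cut off mid-sentence, the fix is to raise `max_tokens`, not to shorten the prompt.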

You can provide stop sequences, such as a newline character and "Movie:".

If the API attempts to generate another answer, or even start a new line, it will run into the stop sequence. You'll want to end the prompt with "Snarky summary:" so it doesn't immediately generate the newline and stop.
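Putting that together, here is a minimal sketch: the few-shot prompt ends with "Snarky summary:" so the completion starts with the answer itself, and the stop sequences cut generation off before a second example appears. The helper below only simulates what the API does with `stop` server-side, so the snippet runs without an API key (the raw continuation is a made-up example).

```python
# The prompt ends with "Snarky summary:" so the model's first tokens
# are the answer, not a new "Movie:" line.
FEW_SHOT = (
    "Movie: Lord of the Rings\n"
    "Snarky summary: Group spends 9 hours returning jewelry\n"
    "\n"
    "Movie: Pretty Woman\n"
    "Snarky summary:"  # stop here; the model fills in the rest
)

STOP = ["\n", "Movie:"]  # passed as the `stop` parameter in the API call

def apply_stop(completion: str, stop_sequences) -> str:
    """Truncate the completion at the earliest stop sequence,
    simulating what the API does server-side."""
    cut = len(completion)
    for s in stop_sequences:
        i = completion.find(s)
        if i != -1:
            cut = min(cut, i)
    return completion[:cut]

# A hypothetical raw continuation that rambles into a new example:
raw = " Hooker with a heart of gold falls for a rich asshole\n\nMovie: Avatar"
print(apply_stop(raw, STOP))
# -> " Hooker with a heart of gold falls for a rich asshole"
```

Everything after the first newline is discarded, so only the single requested summary survives.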

Personally, I add <<END>> to the end of each example and use that as an end tag (and as the stop sequence). The reason I do this is that, with repetition, even high-temperature prompts pick up on the pattern. Otherwise you might get some random output where GPT-3 starts to babble.
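A quick sketch of that end-tag approach, using two of the examples from the question (the tag placement is the point; how you join the examples is up to you):

```python
# Every few-shot example ends with the same tag, and that tag doubles
# as the stop sequence, so generation halts as soon as the model
# emits it.

END = "<<END>>"

examples = [
    ("Lord of the Rings", "Group spends 9 hours returning jewelry"),
    ("The Revenant",
     "Leonardo DiCaprio wanders a frozen wasteland looking for an Oscar"),
]

prompt = "\n\n".join(
    f"Movie: {title}\nSnarky summary: {summary} {END}"
    for title, summary in examples
)
prompt += "\n\nMovie: Dr. Strangelove\nSnarky summary:"

stop = [END]  # the only stop sequence needed
print(prompt)
```

Because the model has seen the tag after every answer, it reliably emits it again, and the stop sequence trims it off before the completion is returned.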

For this particular case, you can just include \n as a stop token.

In the more general case, you want to make sure that your previous examples share a structure that signals the end of a response: a token like the one @daveshapautomator suggested, putting the answer in quotes, or marking the start of the next example with "Movie:". Beware that the choice of structural tokens also influences what kind of responses you get. If the prompt reads like a verbatim reply, the output may be less formal; if it looks like a program, the output may contain artifacts a program would have, and so on.
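As one illustration of the quoted-answer framing mentioned above (the summary text here is invented purely for the example), the closing quote itself can serve as the stop sequence:

```python
# Wrap each answer in quotes so the closing quote marks the end of a
# response; the quote character then works as the stop sequence.

prompt = (
    'Movie: Deadpool\n'
    'Snarky summary: "Deadpool does things Deadpool-y"\n'
    '\n'
    'Movie: Dr. Strangelove\n'
    'Snarky summary: "'  # open the quote; the model writes until it closes it
)

stop = ['"']  # the closing quote ends the completion
```

The trade-off, as noted above, is that quoted framing can nudge the model toward a more spoken, informal tone.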

Thank you all for the help. Going to try the solutions presented.