GPT-3 in practice over larger data sets

Is it standard practice to fill most of the prompt with a structured set of instructions and examples, leaving just enough room at the end for a single completion? Or are there examples of effectively getting GPT-3 to return multiple results in one API call and then parsing the output into separate data points?

In other words:
Should you call GPT-3 one data point at a time to maximize reliability? If not, is there a real-life tool that, to run GPT-3 on, say, 1,000 examples, sent them 10 at a time? (A sketch of what I mean is below.)
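For concreteness, here's roughly what I mean by batching, as a minimal sketch. It assumes the legacy (pre-1.0) openai Python SDK; the task, model name, prompt format, and parsing regex are all made up for illustration, not an established pattern.

```python
import re
import openai  # legacy pre-1.0 SDK; assumes openai.api_key is set elsewhere

# Fixed instructions plus few-shot examples (hypothetical sentiment task).
HEADER = (
    "Label each review as Positive or Negative.\n\n"
    "Review: I loved every minute of it.\nSentiment: Positive\n\n"
    "Review: A total waste of money.\nSentiment: Negative\n\n"
)

def label_batch(reviews):
    """Pack several data points into one prompt and parse numbered answers."""
    numbered = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(reviews))
    prompt = f"{HEADER}Reviews:\n{numbered}\n\nSentiments:\n1."
    resp = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        temperature=0,
        max_tokens=4 * len(reviews) + 8,
    )
    completion = "1." + resp.choices[0].text
    # Map "k. <Label>" lines back to input positions; anything the model
    # skipped or garbled stays None instead of silently shifting answers.
    labels = [None] * len(reviews)
    for m in re.finditer(r"(\d+)\.\s*(Positive|Negative)", completion):
        i = int(m.group(1)) - 1
        if 0 <= i < len(reviews):
            labels[i] = m.group(2)
    return labels

# Usage: run 1,000 items through GPT-3 ten at a time.
# for i in range(0, len(data), 10):
#     results.extend(label_batch(data[i:i + 10]))
```

The numbered-answer format is the fragile part: if the model drops or renumbers an item, the parser has to notice rather than misalign the whole batch, which is presumably why one-data-point-per-call is the safer default.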

Is it more common to keep one fixed, consistent set of examples that teaches GPT-3 the general rules, or are there real-world uses where the examples also change from call to call?

In other words: do you aim for a single canonical set of examples that can accommodate every new case you encounter, or do you grab whatever "local" examples sit near a given data point and ask GPT-3 to extrapolate from those? (Again, a sketch of the latter is below.)
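To make the "local examples" option concrete, here's a minimal sketch of dynamic few-shot selection: embed a pool of labeled examples once, then build each prompt from the k nearest neighbors of the new data point. It assumes the legacy openai SDK's embeddings endpoint; the embedding model name and prompt format are assumptions.

```python
import numpy as np
import openai  # legacy pre-1.0 SDK; assumes openai.api_key is set elsewhere

def embed(texts):
    """Embed a list of strings (model name is an assumption)."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

class LocalExampleSelector:
    """Choose the k labeled examples nearest to each new data point."""

    def __init__(self, examples):
        # examples: list of (text, label) pairs; embedded once up front.
        self.examples = examples
        vecs = embed([text for text, _ in examples])
        self.vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

    def build_prompt(self, query, k=4):
        q = embed([query])[0]
        q = q / np.linalg.norm(q)
        nearest = np.argsort(self.vecs @ q)[-k:]  # top-k by cosine similarity
        shots = "\n\n".join(
            f"Input: {self.examples[i][0]}\nOutput: {self.examples[i][1]}"
            for i in nearest
        )
        return f"{shots}\n\nInput: {query}\nOutput:"
```

The trade-off as I understand it: a fixed canonical set gives every call the same behavior and is easy to audit, while nearest-neighbor selection spends the same prompt budget on the examples most relevant to each input.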

Both approaches seem useful, and I'd be interested in seeing specific real-world examples of each.

Really depends on your use case and requirements.