API configuration endpoint (for prompt engineering)

I’ve been looking at the various use-case examples people have been coming up with.

Currently we have fine-tuning, embeddings, and edits.

A hot topic is preconfiguring GPT with additional contextual information so that it acts as a teacher, a guide, or whatever else…also known as prompt engineering.

I suggest adding an endpoint that lets us store these “prompts” server-side.

That way we could send a completion request with just some input text and a reference to a stored prompt configuration.

This would save us from having to send the full preconfiguration plus input with every request, keep us from maxing out on max tokens…and save some bandwidth.

Example:

Preconfiguration (xyz):
Pretend you are a preschool teacher, and explain the following question so that a child can understand it.

Input:
What is quantum computing

API request:

{
  "configuration": "xyz",
  "prompt": "What is quantum computing"
}
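To make the idea concrete, here is a minimal client-side sketch of what the API would do server-side. The configuration store and function names are hypothetical, invented for illustration; today the full preconfiguration text has to be sent with every completion request.

```python
# Hypothetical server-side store of named prompt configurations.
CONFIGURATIONS = {
    "xyz": (
        "Pretend you are a preschool teacher, and explain the following "
        "question so that a child can understand it."
    ),
}


def build_prompt(configuration: str, prompt: str) -> str:
    """Expand a request that references a stored configuration into the
    full prompt text that would actually be fed to the model."""
    return f"{CONFIGURATIONS[configuration]}\n\n{prompt}"


# The client only sends the short request body from the example above;
# the expansion happens once, server-side.
print(build_prompt("xyz", "What is quantum computing"))
```

The point is that the long preconfiguration lives in `CONFIGURATIONS` once, while each request carries only the short key `"xyz"` plus the user’s input.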

Or, even better:

Preconfiguration (xyz):
Pretend you are a {0}, and explain the following question so that a {1} can understand it.

Input:
What is quantum computing

API request 1:

{
  "configuration": "xyz",
  "interpolations": ["preschool teacher", "child"],
  "prompt": "What is quantum computing"
}

API request 2:

{
  "configuration": "xyz",
  "interpolations": ["quantum physics professor", "2nd year quantum physics student"],
  "prompt": "What is quantum computing"
}
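The interpolation variant can be sketched the same way: the stored template carries numbered placeholders ({0}, {1}) that each request fills in. Again, the store and function names are hypothetical, and Python’s `str.format` stands in for whatever substitution mechanism the server would actually use.

```python
# Hypothetical store of templates with numbered placeholders.
TEMPLATES = {
    "xyz": (
        "Pretend you are a {0}, and explain the following question "
        "so that a {1} can understand it."
    ),
}


def expand(configuration: str, interpolations: list, prompt: str) -> str:
    """Fill the stored template's placeholders with the request's
    interpolations, then append the user's input."""
    preamble = TEMPLATES[configuration].format(*interpolations)
    return f"{preamble}\n\n{prompt}"


# The two requests above, sharing one stored template:
print(expand("xyz", ["preschool teacher", "child"],
             "What is quantum computing"))
print(expand("xyz", ["quantum physics professor",
                     "2nd year quantum physics student"],
             "What is quantum computing"))
```

One template, two audiences: only the short interpolation list changes between requests.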

These are very simplified examples.
I’ve seen configurations span multiple paragraphs, with instructions covering everything from output language to formatting style and much more.
