Flow-Prompt: a prompt engineering library for calculating the token budget & distributing load across AI models

  1. Hey, do people use any libraries for prompt engineering?
  • to automatically calculate the token budget (see the sketch after this list)
  • to fit dynamic data into the prompt until the budget is used up
  • to skip a piece of data if it doesn’t fit
  • to use different AI models, e.g. GPT-4 by default, and automatically fall back to GPT-3 on rate limits or outages
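To make the budget point concrete, here is a minimal, generic sketch of a token-budget check using tiktoken; the context-window and reserved-completion numbers are assumptions for illustration and are not tied to any particular library:

import tiktoken

# Assumed numbers for illustration only; real limits depend on the model.
CONTEXT_WINDOW = 8192
RESERVED_FOR_COMPLETION = 1024

def remaining_budget(parts, model="gpt-4"):
    # How many prompt tokens are still available after adding the given parts.
    enc = tiktoken.encoding_for_model(model)
    used = sum(len(enc.encode(part)) for part in parts)
    return CONTEXT_WINDOW - RESERVED_FOR_COMPLETION - used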

I couldn’t find such a library on the market while working on two startups, so we built our own and open-sourced our approach. In short, your prompt could look like this:

from flow_prompt import PipePrompt

# id of the prompt
prompt = PipePrompt('merge_code')

prompt.add("It's a system message, Hello {name}", role="system")

# Prioritize the indexed context: keep adding its items until the budget is filled.
# Prefix it with "The closest indexed context..."; if there is no indexed context, the prefix is not added.
prompt.add('{indexed_context}',
    priority=2,
    is_multiple=True, while_fits=True, in_one_message=True, continue_if_doesnt_fit=True,
    presentation="The closest indexed context to the user request:",
    label='indexed_context'
)
# add the last messages only if the indexed context above was fully fitted (by priority order)
prompt.add('{messages}', priority=3, 
    while_fits=True, is_multiple=True, add_in_reverse_order=True,
    add_if_fitted_labels=['indexed_context'],
    label='last_messages'
)
# add the in-progress assistant response, if it's present in the context
prompt.add('{assistant_response_in_progress}',
    role="assistant",
    presentation='Continue the response right after last symbol:'
)
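
For intuition, the options above roughly describe a greedy fitting loop. Here is a minimal sketch of that idea (not Flow-Prompt's actual implementation; all names are illustrative):

def fit_parts(parts, budget, count_tokens):
    # Greedy sketch: add parts in priority order while they fit the token budget.
    # Each part is a dict: {'label', 'priority', 'values', 'continue_if_doesnt_fit'}.
    result, fully_fitted = [], set()
    for part in sorted(parts, key=lambda p: p["priority"]):
        everything_fit = True
        for value in part["values"]:
            cost = count_tokens(value)
            if cost <= budget:
                result.append(value)
                budget -= cost
            elif part.get("continue_if_doesnt_fit"):
                everything_fit = False  # skip this value, keep trying the rest
            else:
                everything_fit = False
                break  # stop adding values from this part
        if everything_fit:
            fully_fitted.add(part["label"])
    return result, fully_fitted

In this picture, a part like last_messages would only be included if indexed_context ended up in the fully_fitted set, which is what add_if_fitted_labels expresses.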

We also added support for behaviours across different AI models: if the request to the first model fails (rate limit or outage), the library automatically retries with another model. Traffic is distributed across models based on weights.
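As a rough illustration of that weighted-fallback idea (a generic sketch, not the library's API; the function names are made up):

import random

def call_with_fallback(models, make_request):
    # models: list of (model_name, weight); make_request(model_name) raises on failure.
    # Pick a weighted-random model first, then fall back to the remaining ones.
    remaining = list(models)
    while remaining:
        names, weights = zip(*remaining)
        choice = random.choices(names, weights=weights, k=1)[0]
        try:
            return make_request(choice)
        except Exception:
            remaining = [(name, w) for name, w in remaining if name != choice]
    raise RuntimeError("all models failed")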

Please give us feedback; we're actively working on the platform. We're especially interested in supporting dynamic prompts that can be updated without redeployment.