I integrated GPT into my app for some internal tools but I’d like to find a service that would keep my prompts, receive a query from my backend, and let me update the prompt without deploying my app again (some history, analytics, and version control would be a plus). The end goal is to give my non-dev team access to this service so that they can update it on their own.
Is there anything that would help me out? Thank you
Hi Arthur,
While building OpenAI integrations into our SaaS product, we built an internal tool to manage prompts in our production app. One of the biggest advantages? You won’t have to dive into your codebase every time you want to tweak a prompt. We are considering releasing it to a wider audience. The three main features we’ve developed are:
- Prompt version control: manage your prompts in a friendly UI without updating your codebase
- Side-by-side prompt / output comparison
- Event history: monitor the performance of each prompt in real time
I’ll send you a private message; let me know if you’re interested!
Cheers,
Sydney
Hey man, please send me a PM, I’m interested in hearing about it.
So if I had an app where I wanted to give my customer the ability to change the prompts could that integration happen through this service?
We recommend prompteams.com
It allows API retrieval of your prompts, so you can version-control them and always fetch the most up-to-date version through an API.
We’re also a small team and keen to get some feedback.
You could create an interface and use Prompteams to manage your prompts, and take in the prompt from the customer as a variable in your prompt!
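A minimal sketch of that pattern, assuming a hypothetical REST endpoint that returns the latest prompt template as JSON (the real Prompteams API may differ) and treating the customer’s text as a template variable:

```python
import json
import urllib.request

def fetch_prompt(api_url: str) -> str:
    """Fetch the latest prompt template from a (hypothetical) prompt-management endpoint."""
    with urllib.request.urlopen(api_url) as resp:
        return json.load(resp)["template"]

def render_prompt(template: str, **variables) -> str:
    """Fill placeholders like {customer_input} with runtime values."""
    return template.format(**variables)

# At request time the template lives in the service, not in your codebase:
# template = fetch_prompt("https://example.invalid/api/prompts/support-bot/latest")
template = "You are a support bot. The customer asks: {customer_input}"
print(render_prompt(template, customer_input="How do I reset my password?"))
```

Your backend re-fetches (or caches) the template, so non-dev teammates can edit it in the UI without a redeploy.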
You can use an open-source library for prompt management: flow-prompt. Full disclosure: I created it, and we use it in several projects. It manages what data will fit into the prompt based on priorities, the model’s max context size, and a prompt max size.
It’s especially good for dynamic data like files, RAG context, and so on.
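To illustrate the idea of priority-based packing (this is a generic sketch, not flow-prompt’s actual API; the names and the 4-chars-per-token estimate are assumptions):

```python
from dataclasses import dataclass

@dataclass
class Piece:
    text: str
    priority: int  # lower number = more important

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token.
    return max(1, len(text) // 4)

def pack_prompt(pieces: list[Piece], max_tokens: int) -> str:
    """Keep the highest-priority pieces that fit into the token budget."""
    budget = max_tokens
    kept = []
    for piece in sorted(pieces, key=lambda p: p.priority):
        cost = estimate_tokens(piece.text)
        if cost <= budget:
            kept.append(piece)
            budget -= cost
    # Preserve the original ordering of whatever survived the budget.
    kept.sort(key=lambda p: pieces.index(p))
    return "\n".join(p.text for p in kept)

pieces = [
    Piece("System instructions: answer concisely.", priority=0),
    Piece("RAG context: " + "lorem ipsum " * 200, priority=2),
    Piece("User question: what changed in v2?", priority=1),
]
# The oversized RAG context is dropped; instructions and question survive.
print(pack_prompt(pieces, max_tokens=50))
```

The same loop generalizes to per-file or per-chunk pieces, which is why it suits dynamic data like RAG results.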
Please let me know if you have any questions.
Hi Arthur,
I use Agenta for exactly this. It’s an open-source platform that lets your non-dev team update prompts without touching code or deploying. Here’s what makes it useful:
- Your team can test and update prompts through a simple interface, while developers keep control of the application logic
- You get version control and can roll back changes if something breaks
- It logs all inputs, outputs, and costs so you can track usage and performance
- The playground lets you test prompts side-by-side across different models
I like that it helps bridge the gap between developers and domain experts. They can experiment and improve prompts while you focus on the core application.
Feel free to check it out at agenta.ai
Hi,
Integrating with an LLM platform would be the best-case scenario. I have a few below:
- GitHub - langfuse/langfuse: 🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with LlamaIndex, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
- GitHub - Agenta-AI/agenta: The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM Observability all in one place.
- GitHub - Helicone/helicone: 🧊 Open source LLM observability platform. One line of code to monitor, evaluate, and experiment. YC W23 🍓
Or, even simply creating a database table for the prompts would still let you set up a prompt collection and use it (it’s bad practice compared to the platforms above, but it’s a quick and easy way to get set up; I wouldn’t really recommend it).
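For completeness, here is what that quick-and-dirty database approach might look like, as a minimal sketch with SQLite (table and function names are made up for illustration):

```python
import sqlite3

# Prompts live in a table; each edit inserts a new row, so old rows
# double as a crude version history.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE prompts (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL,
        template TEXT NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def save_prompt(name: str, template: str) -> None:
    conn.execute("INSERT INTO prompts (name, template) VALUES (?, ?)", (name, template))
    conn.commit()

def latest_prompt(name: str) -> str:
    row = conn.execute(
        "SELECT template FROM prompts WHERE name = ? ORDER BY id DESC LIMIT 1",
        (name,),
    ).fetchone()
    return row[0]

save_prompt("support-bot", "You are a helpful support agent.")
save_prompt("support-bot", "You are a concise, friendly support agent.")  # new version
print(latest_prompt("support-bot"))  # prints the newest version
```

You still have to build the editing UI, access control, and analytics yourself, which is exactly what the platforms above give you out of the box.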
Hope this helps