Managing prompts in production

Hi all,

We previously worked with a team that was storing prompts as JSON in their database and struggling to iterate and migrate to production. So we set out to solve this by building an enterprise prompt management platform, Promptly-HQ. Today, Promptly-HQ integrates with OpenAI and Azure OpenAI, has team and project support, scalable APIs, and prompt versioning, but we're looking to add more features. We would love to hear your feedback, use cases, or anything else you have to say. We're looking to help developer and product teams, so don't hesitate to reach out if we can help :slight_smile:

Thanks!

Why would you store them in a database? That sounds super cumbersome.
I store them in JSON files managed in git, render them with a template renderer, and it works great.
I then log the prompt version and end results, and plot user ratings in a dashboard in Observe, so I can track quality over time.
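The workflow above can be sketched in a few lines. This is a hypothetical illustration, not the poster's actual code: the file contents, prompt key, and log fields are all assumptions.

```python
import json
import string

# Hypothetical contents of a prompts.json file tracked in git;
# the key name and template text are made up for illustration.
PROMPTS = json.loads('{"summarize_v3": "Summarize in $max_words words: $text"}')

def render(key: str, **vars) -> str:
    """Fetch a prompt template by key and fill in its variables."""
    return string.Template(PROMPTS[key]).substitute(**vars)

def log_result(key: str, rating: int) -> str:
    """Emit a structured record (prompt version + user rating) for the dashboard."""
    return json.dumps({"prompt_version": key, "user_rating": rating})

print(render("summarize_v3", max_words=50, text="..."))
print(log_result("summarize_v3", rating=4))
```

Because the JSON file lives in git, the prompt version in each log record can be tied back to a specific commit.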

It was cumbersome for the team, but their solution was multi-tenanted and needed to retrieve prompts for customers after an event arose. That does sound like a decent release pipeline, though. When you say you used Observe, do you mean Observe.ai?

Sharing this tool, Prompteams, for prompt management (we call it Git for AI).

You can do everything with your AI prompts: storing, versioning (repositories, branches, commits), reverting commits, running unlimited test cases before commits, and APIs for automatic release to production. Basically, you can build your own pipeline with Prompteams.

Reduce hallucinations, catch edge cases, and maintain prompt quality when changing a prompt by running test cases with combined criteria.
For example, you can add a criterion that outputs be under 50 characters as a hallucination test case, maintain quality with a check for 'Starts with "As an AI"', or combine as many criteria as you like.
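As a rough sketch of what combining those criteria might look like (the function name and the interpretation of the "As an AI" check as a failure condition are assumptions, not Prompteams' actual implementation):

```python
def passes_criteria(output: str) -> bool:
    """Combine several test-case criteria: a max length and a banned opener."""
    checks = [
        len(output) < 50,                   # hallucination guard: keep output short
        not output.startswith("As an AI"),  # quality guard: no boilerplate opener
    ]
    return all(checks)

print(passes_criteria("Paris is the capital of France."))   # passes both checks
print(passes_criteria("As an AI, I cannot answer that."))   # fails the opener check
```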

We would love feedback and to share knowledge. It is still free at the moment; we are still very early in development.

Ah, no! www.observeinc.com – it's like Splunk or Elasticsearch, but better. Kind of like a real-time Tableau for observability data :slight_smile:

Ah, just posted a similar question and saw this post as well.

@jwatte very cool, I’m effectively doing something similar. Have you found a particular JSON format effective for storing everything you need?

Nah, it's just "SOME_KEY": "prompt goes here" and my code knows which key to ask for.
There’s only like 30 prompts so it’s not particularly cumbersome to manage.

It would be interesting to have, on top of Git for Prompts, an eval feature that gives a clear numerical measure of why one prompt is better than another.
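One simple way to get such a number (a sketch under my own assumptions, not a feature of any tool mentioned here) is to score each prompt version by the fraction of test-case outputs that pass a shared set of checks:

```python
def score(outputs: list[str], checks) -> float:
    """Fraction of outputs passing every check: a single number per prompt version."""
    passed = sum(1 for o in outputs if all(c(o) for c in checks))
    return passed / len(outputs)

# Hypothetical checks and captured outputs for two prompt versions.
checks = [
    lambda o: len(o) < 50,
    lambda o: not o.startswith("As an AI"),
]
v1_outputs = ["As an AI, I can't say.", "Paris."]
v2_outputs = ["Paris.", "Berlin."]

print(score(v1_outputs, checks), score(v2_outputs, checks))  # 0.5 1.0
```

A higher score then gives a concrete, comparable reason to prefer one prompt version over another.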

One thing that would be nice is decoupling prompts from code as well, for rapid iteration and testing.