Promptotype - Develop, test, and monitor your structured LLM tasks

Hi community, I’m happy to officially introduce Promptotype!

Promptotype is a platform for { structured } prompt engineering: a prompt development and testing platform focused on the specific use case of task-oriented, structured prompts, i.e. prompts whose responses are either function (tool) calls or JSON that conforms to a schema.
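
To make that concrete, here is a minimal sketch of such a structured task, written against the OpenAI Python SDK's structured-outputs response format rather than Promptotype's own interface; the ticket-classification schema and prompt are invented purely for illustration.

```python
# Illustrative sketch of a task-oriented, structured prompt: the model must
# reply with JSON matching a schema. Generic OpenAI SDK usage, not Promptotype's API.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ticket_schema = {
    "type": "object",
    "properties": {
        "category": {"type": "string", "enum": ["billing", "bug", "feature_request"]},
        "urgency": {"type": "integer", "minimum": 1, "maximum": 5},
        "summary": {"type": "string"},
    },
    "required": ["category", "urgency", "summary"],
    "additionalProperties": False,
}

completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Classify the support ticket."},
        {"role": "user", "content": "The app crashes every time I open settings."},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {"name": "ticket", "schema": ticket_schema, "strict": True},
    },
)
print(completion.choices[0].message.content)  # JSON string conforming to ticket_schema
```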

Key features

Extended playground:

  • Develop and test templated prompts.

  • Test your prompts on multiple inputs (each with its expected response) at once (see the sketch after this list).

  • Playground for function (tool) calling.

  • Support for the ‘structured outputs’ response format and schema definition.
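
As a rough illustration of what a multi-input test might look like, the sketch below renders a templated prompt for several inputs and checks each expected response against a JSON schema. The test-case format is hypothetical, not Promptotype's own.

```python
# Hypothetical sketch of batch-testing a templated prompt: each case pairs
# template variables with an expected structured response, validated against
# a schema. The case format is illustrative only.
from jsonschema import ValidationError, validate

PROMPT_TEMPLATE = "Classify the support ticket: {ticket_text}"

ticket_schema = {
    "type": "object",
    "properties": {
        "category": {"type": "string", "enum": ["billing", "bug", "feature_request"]},
        "urgency": {"type": "integer", "minimum": 1, "maximum": 5},
    },
    "required": ["category", "urgency"],
}

test_cases = [
    {
        "variables": {"ticket_text": "I was charged twice this month."},
        "expected": {"category": "billing", "urgency": 3},
    },
    {
        "variables": {"ticket_text": "Please add a dark mode."},
        "expected": {"category": "feature_request", "urgency": 1},
    },
]

for case in test_cases:
    prompt = PROMPT_TEMPLATE.format(**case["variables"])
    # In a real run the rendered prompt would be sent to the model; here we only
    # show that every expected response must itself conform to the schema.
    try:
        validate(instance=case["expected"], schema=ticket_schema)
        print(f"OK   {prompt}")
    except ValidationError as err:
        print(f"FAIL {prompt}: {err.message}")
```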

Platform:

  • Schedule periodic test runs on selected datasets to ensure your prompt’s ongoing performance and resilience to model changes.

  • Manage a library of prompts, model configurations, sample inputs (with expected values or output schemas), and test datasets.

  • Fine-tune models directly from your collection library.

  • Track your work with a full history of runs and tests.

You’re all welcome to try it out! We offer a free tier that includes access to the playground.

Feedback appreciated!