New Playground features: Generate in the Playground

Easily generate prompts, function definitions, and structured output schemas in the Playground using the Generate button. The generated prompts provide a solid starting point that you can then build out with more specific requirements. Get started in the Playground.

This is working great, thank you for creating this topic!

I like it! I noticed that the prompts it generates generally have a certain structure to them (list the steps, specify the output format, give examples, etc.). To build this feature, did you basically fine-tune gpt-4o on a bunch of examples of "good" prompts? Or was there more to it than that?

This is a great tool. I am new to OpenAI and I think I will start from here… :+1:

The output is also missing a quality of "programming", a "you are" and "you do" attitude; instead there are often ambiguous examples produced that would be far better as rewarded few-shot, if that were to be done. The numbered instructions the "generate" AI follows don't directly relate to thinking, procedural production, or style.

The system instruction is written as if it were just a user input, but with no data to act on. Perhaps an AI, given some instructions, is not sure whether it should follow the instructions or implement them.

First, some UI concerns before the faults:

  • it is not obvious that the generate AI has no idea of the existing function, schema, or prompt in the playground boxes - "improve the function" commands will result in data loss.
  • multiline input requires Shift+Enter, so it is very easy to send prematurely. A send button would be better.

Then:

  • working with functions, the AI is completely oblivious to the need for, or the capability of, writing anyOf schemas in response to a proposal for different types of outputs. Nor does it understand the implications of "strict" or which model it is producing for.
  • invalid function schemas are produced

Example: write a function where the AI has a choice of three schemas, one to report a policy violation, one to report it will await more input because 'user' looks incomplete, and one to disconnect or say goodbye.

Result: a useless strict function that will produce all three keys - and no sub-schemas.
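
For concreteness, a hypothetical reconstruction of that kind of flattened schema (names are illustrative, not the actual generated output): because strict mode forces every property into "required", the model must emit all three keys on every call, with no mutually exclusive sub-schemas.

```python
# Hypothetical reconstruction of the flattened result: one strict object
# whose three report fields are required siblings rather than mutually
# exclusive sub-schemas. All names are illustrative.
flattened_tool = {
    "type": "function",
    "function": {
        "name": "report_session_event",
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {
                "policy_violation": {"type": "string"},
                "awaiting_reason": {"type": "string"},
                "goodbye_message": {"type": "string"},
            },
            # strict mode forces every property into "required", so the
            # model must emit all three keys on every single call
            "required": ["policy_violation", "awaiting_reason", "goodbye_message"],
            "additionalProperties": False,
        },
    },
}
```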

Iteration: a long explanation of the availability of anyOf to contain a nest of unique schemas, then pasting the bad function back in by necessity.

Result: schemas with the same name, producing an error: "Invalid schema: Objects provided via 'anyOf' must not share identical first keys. Consider adding a discriminator key or rearranging the properties to ensure the first key is unique."
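
A minimal sketch of a version the API should accept, assuming a unique discriminator-style first key in each anyOf branch (all names here are illustrative, not what Generate produced):

```python
# Sketch of a tool whose anyOf branches each begin with a unique first
# key, avoiding the "identical first keys" error quoted above.
# All names are illustrative.
session_tool = {
    "type": "function",
    "function": {
        "name": "report_session_event",
        "strict": True,
        "parameters": {
            "type": "object",
            "properties": {
                "event": {
                    "anyOf": [
                        {   # branch 1: report a policy violation
                            "type": "object",
                            "properties": {"violation": {"type": "string"}},
                            "required": ["violation"],
                            "additionalProperties": False,
                        },
                        {   # branch 2: await more input from 'user'
                            "type": "object",
                            "properties": {"awaiting_reason": {"type": "string"}},
                            "required": ["awaiting_reason"],
                            "additionalProperties": False,
                        },
                        {   # branch 3: disconnect or say goodbye
                            "type": "object",
                            "properties": {"goodbye_message": {"type": "string"}},
                            "required": ["goodbye_message"],
                            "additionalProperties": False,
                        },
                    ]
                }
            },
            "required": ["event"],
            "additionalProperties": False,
        },
    },
}
```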

"Generate" also quite often hangs and eventually returns errors when inputting schemas to improve.


If the AI could make a preliminary minimal API call and take action on the error received from its failed response attempts, this could be avoided (a sketch of such a preflight call is below).

Even then, the playground still produces "get code" snippets with invalid Python schemas.
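
A minimal sketch of such a preflight check, assuming the official openai Python SDK; the model name is a placeholder and preflight_tool_schema is a hypothetical helper, not anything the playground provides:

```python
import openai

client = openai.OpenAI()

def preflight_tool_schema(tool: dict) -> str | None:
    """Return the API's error message if it rejects the tool schema, else None."""
    try:
        # One-token throwaway request: we only care whether the schema validates.
        client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any tools-capable model works
            messages=[{"role": "user", "content": "ping"}],
            tools=[tool],
            max_tokens=1,
        )
    except openai.BadRequestError as err:
        return str(err)
    return None
```

Feeding the returned message back to the generate AI would give it exactly the kind of error text quoted above to act on.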