As we work more with o3-mini and future reasoning models, as opposed to gpt-4o and similar models, are the documented best practices for prompting reasoners taken into account when the Playground generates prompts?
The reason I ask is that the prompting advice in the reasoning documentation says:

> Try zero shot first, then few shot if needed: Reasoning models often don’t need few-shot examples to produce good results, so try to write prompts without examples first.
Yet the Generate feature in the Playground often runs counter to this advice and includes a couple of few-shot examples in the generated prompt.
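To make the contrast concrete, here is a rough sketch of what the docs are describing, using made-up prompt text and plain message dicts in the Chat Completions style (everything here is hypothetical, not output from the Generate feature):

```python
# Hypothetical illustration of zero-shot vs. few-shot prompts.
# Prompt text and examples are invented for demonstration only.

task = ("Classify the sentiment of this review as positive or negative: "
        "'The battery died in an hour.'")

# Zero-shot: just the task, which the docs suggest trying first
# for reasoning models like o3-mini.
zero_shot = [
    {"role": "user", "content": task},
]

# Few-shot: the same task preceded by worked examples, which is
# the pattern the Playground's Generate feature tends to produce.
few_shot = [
    {"role": "user", "content": "Review: 'Loved it!' Sentiment:"},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Review: 'Total waste of money.' Sentiment:"},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": task},
]

print(len(zero_shot), "message vs.", len(few_shot), "messages")
```

For a reasoning model, the docs suggest starting with the first shape and only falling back to the second if results are poor.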
Is there a chance the Generate feature could be updated to adapt its output to the selected model?