Model Distillation (including Evals and Stored Completions)

Many of you are already performing distillation on your own, but it is complex. We're introducing a new Model Distillation workflow: use outputs from larger models like GPT-4o or o1-preview to fine-tune smaller, cost-efficient models that deliver similar performance on your specific tasks. The suite includes Stored Completions and Evals, now integrated directly into the OpenAI platform. Get started with our docs.
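For example, capturing a completion from a larger model for later distillation is a single flag on the standard chat call. A minimal sketch (the task, messages, and metadata values below are purely illustrative):

```python
from openai import OpenAI

client = OpenAI()

# Generate with a larger model and persist the result in Stored
# Completions so it can later serve as distillation training data.
response = client.chat.completions.create(
    model="gpt-4o",
    store=True,  # opt this completion into Stored Completions
    metadata={"task": "invoice-extraction", "run": "v1"},  # tags for filtering later
    messages=[
        {"role": "system", "content": "Extract the total amount from the invoice."},
        {"role": "user", "content": "Invoice #1234 ... Total due: $512.00"},
    ],
)

print(response.choices[0].message.content)
```

Stored completions can then be filtered by model and metadata in the dashboard, evaluated with Evals, and used to fine-tune a smaller model such as gpt-4o-mini.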

Amazing!! Thank you so much for this!

This is brilliant. Just the other week I started collecting o1-preview input/output pairs from more complex tasks in order to fine-tune a gpt-4o model. Having this process streamlined will bring so many efficiencies.
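For anyone still collecting pairs by hand like I was, this is roughly the manual step that goes away: writing the pairs out in the chat fine-tuning JSONL format, one example per line (the pairs and file name here are placeholders):

```python
import json

# Hypothetical (prompt, o1-preview answer) pairs collected from runs.
pairs = [
    ("Summarize the contract clause: ...", "The clause states that ..."),
]

# Chat fine-tuning expects JSONL: one {"messages": [...]} object per line.
with open("distillation_train.jsonl", "w") as f:
    for prompt, answer in pairs:
        example = {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": answer},
            ]
        }
        f.write(json.dumps(example) + "\n")
```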

Very useful, keen to test this out, thanks!

Are you planning to add the ability to query stored completions? That could eliminate the need for Redis (or other) caches on the developers' side.
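For context, this is the kind of client-side cache I mean; a rough sketch (the key scheme and helper name are my own):

```python
import hashlib
import json

import redis
from openai import OpenAI

client = OpenAI()
cache = redis.Redis()  # assumes a local Redis instance


def cached_completion(model: str, messages: list[dict]) -> str:
    # Deterministic cache key derived from the full request payload.
    key = "chat:" + hashlib.sha256(
        json.dumps({"model": model, "messages": messages}, sort_keys=True).encode()
    ).hexdigest()

    if (hit := cache.get(key)) is not None:
        return hit.decode()

    response = client.chat.completions.create(model=model, messages=messages)
    text = response.choices[0].message.content
    cache.set(key, text)
    return text
```

If stored completions were queryable by request, all of this bookkeeping could move to the platform instead.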

Hi @miqdad - I have just gotten started with this. One question that came up: will you also enable manual selection of stored completions? The filtering capabilities are great, but one practical challenge is that I cannot exclude individual stored completions that share the same metadata, for example when a completion did not produce the desired output. Filtering combined with manual selection would therefore be extremely helpful.

Alternatively, if there were a way to delete individual stored completions, that would also help.
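In case it helps others, the workaround I am experimenting with is to validate first and only store completions that pass, so undesired outputs never enter Stored Completions. A rough sketch (the validation helper and metadata keys are made up, and the second call does cost extra tokens):

```python
from openai import OpenAI

client = OpenAI()


def looks_valid(text: str) -> bool:
    """Hypothetical task-specific check on the model output."""
    return text.strip() != ""


prompt = [{"role": "user", "content": "..."}]

# First pass: generate WITHOUT storing.
draft = client.chat.completions.create(model="gpt-4o", messages=prompt)
text = draft.choices[0].message.content

if looks_valid(text):
    # Second pass with store=True and a filterable tag. Note the stored
    # output may differ slightly from the draft.
    client.chat.completions.create(
        model="gpt-4o",
        store=True,
        metadata={"dataset": "distill-v1", "quality": "validated"},
        messages=prompt,
    )
```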

Thanks!

This topic was automatically closed after 7 days. New replies are no longer allowed.