Did the model distillation feature disappear from the OpenAI platform?

I remember recently seeing an option to create distilled models on the OpenAI platform, but I can’t seem to find it today. Has that feature been deprecated? Could someone help clarify this?

Ok, seriously, where is the model distillation feature? It’s just gone. Am I losing my mind?

The entry point is Dashboard → Logs → Chat Completions

You have to enable logging, and then make Chat Completions calls with the “store” parameter set, which persists those calls for 30 days.
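A minimal sketch of what such a request body looks like, assuming the `store` and `metadata` parameters from the Chat Completions docs (the model name and metadata tag here are illustrative, not prescribed):

```python
# Sketch: a Chat Completions request body that opts in to 30-day log
# retention via "store". Model name and metadata values are illustrative.
import json

def stored_completion_request(user_text: str) -> dict:
    """Build a Chat Completions payload that persists the call in the logs."""
    return {
        "model": "gpt-4o-mini",                    # illustrative model choice
        "store": True,                             # persist this call in the dashboard logs
        "metadata": {"purpose": "distillation"},   # optional tags for filtering logs
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_text},
        ],
    }

print(json.dumps(stored_completion_request("Say hello."), indent=2))
```

With the official Python SDK you would pass the same fields as keyword arguments to `client.chat.completions.create(...)`.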

Then you’d need to follow the pattern they show, where running evals first is encouraged.

https://openai.com/index/api-model-distillation/
https://platform.openai.com/docs/guides/distillation

The videos are all dead, so the feature may have gone by the wayside.

I have not engaged with this, simply because one primary use of fine-tuning is to turn the quality obtained by system-message prompting into automatic behavior through examples, not just large-to-small distillation. The distillation platform does not provide the content control needed to develop fine-tuning sets like the ones you would actually run with an application.

That’s what I figured, but I do wish there was more we could do with all the logged messages. I could theoretically have a database of hundreds of examples for fine-tuning, if only they let me download them as a CSV file, or ideally as JSONL.
I mean, it’s not like I can’t do it with a bit of tweaking, but the extra hassle when you’re talking about hundreds of prompts is just a bummer for no reason.
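If you can get the prompt/response pairs out of the logs (even by copy-paste or your own local logging), turning them into the fine-tuning JSONL format is short work. A sketch, assuming the standard one-`{"messages": [...]}`-object-per-line chat fine-tuning format; the example pairs and file name are made up:

```python
# Sketch: convert logged (user, assistant) pairs into a JSONL fine-tuning
# file, one {"messages": [...]} record per line. The pairs below are
# illustrative; in practice they would come from your exported logs.
import json

def to_finetune_jsonl(pairs, system_prompt, path):
    """Write chat fine-tuning records, one JSON object per line."""
    with open(path, "w", encoding="utf-8") as f:
        for user_text, assistant_text in pairs:
            record = {
                "messages": [
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": user_text},
                    {"role": "assistant", "content": assistant_text},
                ]
            }
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

pairs = [("What is 2+2?", "4"), ("Capital of France?", "Paris")]
to_finetune_jsonl(pairs, "You are a terse assistant.", "train.jsonl")
```

So the missing piece really is just the export button, not the conversion.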

I have API call logging enabled and I don’t see the Distill option. Anybody know why it’s gone?