Thanks, that helps a lot. Tier 4… This won’t do! Take my money, please.
Have a good day and thanks again
If you wish to discuss a response in detail, please create a new thread with a link to the reply in the body (using the icon) to avoid filling up the AMA thread.
Yes! It is possible to do supervised fine-tuning and then run preference fine-tuning. We have seen good results from running SFT first and then DPO after.
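For anyone who wants to try the chained approach, here’s a minimal sketch using the Python SDK; the file IDs, base model, and beta value are placeholders you’d swap for your own:

```python
from openai import OpenAI

client = OpenAI()

# Stage 1: supervised fine-tuning (SFT) on a chat-formatted JSONL dataset.
# "file-sft-data" and the base model are placeholders.
sft_job = client.fine_tuning.jobs.create(
    model="gpt-4o-2024-08-06",
    training_file="file-sft-data",
    method={"type": "supervised"},
)

# ...wait for the SFT job to succeed (poll jobs.retrieve), then:
sft_model = client.fine_tuning.jobs.retrieve(sft_job.id).fine_tuned_model

# Stage 2: preference fine-tuning (DPO) on top of the SFT checkpoint,
# using a dataset of preferred/non-preferred response pairs.
dpo_job = client.fine_tuning.jobs.create(
    model=sft_model,
    training_file="file-pref-data",  # placeholder file ID
    method={"type": "dpo", "dpo": {"hyperparameters": {"beta": 0.1}}},
)
```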
We are actively working on a guide on how to approach reinforcement fine-tuning, and will publish it once it’s ready! There are still some details we want to work out first.
At a high level though, I would say to keep several things in mind:
More to come here soon!
The microcontroller was an ESP32-S3.
@michellep Are structured outputs coming to the o1-mini API soon?
We’ve heard a lot of demand for search and browse capabilities in the API – especially from folks who want to build applications that are grounded in super recent information. We’re actively working on this and expect to launch something next year!
When will we be able to fine-tune voice models?
Is this something in the works or on the roadmap?
Hey OAI, great release today!
For o1, is there a roadmap to achieve in-CoT function calling to supplement the reasoning context? Is this more of an Assistants update, or is this the super secret Agents project?
We are always working on decreasing the prices of our models – so I would expect more here over time as well! I would also recommend trying the new 4o-mini realtime model we launched today; it’s quite a bit cheaper too.
Thanks for the Go and Java SDKs. Any updates on an official OpenAI PHP SDK?
Yes, you should be confident building with client.beta.chat.completions.parse for Pydantic Structured Outputs!
Also yes, we are working on bringing structured outputs (and also other useful API features) to all reasoning models, including the o-mini series. Stay tuned.
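For reference, a minimal sketch of the parse helper with a Pydantic model (the model name and schema here are just illustrative):

```python
from pydantic import BaseModel
from openai import OpenAI

class CalendarEvent(BaseModel):
    # Illustrative schema only; use your own fields.
    name: str
    date: str
    participants: list[str]

client = OpenAI()

completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",  # placeholder; needs a model that supports structured outputs
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {"role": "user", "content": "Alice and Bob are going to a science fair on Friday."},
    ],
    response_format=CalendarEvent,  # the Pydantic model becomes a strict JSON schema
)

event = completion.choices[0].message.parsed  # a validated CalendarEvent instance
```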
If you’re referring to the “generate” feature that auto-generates a json schema for you – this is just a simple call to our chat completions API with a specific prompt! You can actually see the prompt we use in our documentation: https://platform.openai.com/docs/guides/prompt-generation#meta-schemas
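In other words, you can reproduce it yourself with an ordinary chat completions call. A rough sketch of the shape (the meta-prompt below is a stand-in; the real one is in the linked docs):

```python
from openai import OpenAI

client = OpenAI()

# Stand-in meta-prompt; the actual one is published at the docs link above.
META_PROMPT = (
    "You are given a task description. Respond only with a JSON Schema "
    "describing the structured output for that task."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model choice
    messages=[
        {"role": "system", "content": META_PROMPT},
        {"role": "user", "content": "Extract action items from meeting notes."},
    ],
    response_format={"type": "json_object"},  # one way to nudge valid JSON back
)

schema = response.choices[0].message.content  # the generated JSON schema
```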
Congrats, that’s amazing! I was super happy with the chart shown today about o1’s structured outputs performance!
Today, the best way is with a tool call that you use to trigger o1 (probably using the new out-of-band conversation feature). We’ll keep investing in making it easier to use more intelligence within the Realtime API.
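A rough sketch of that pattern, assuming a hypothetical ask_o1 tool defined in your session config and whatever WebSocket client you already use (error handling omitted):

```python
import json
from openai import OpenAI

client = OpenAI()

def handle_realtime_event(ws, event: dict) -> None:
    """Bridge a Realtime tool call out to o1, then hand the answer back."""
    # Fires once the voice model has finished emitting the tool-call arguments.
    if (
        event["type"] == "response.function_call_arguments.done"
        and event["name"] == "ask_o1"  # hypothetical tool defined via session.update
    ):
        args = json.loads(event["arguments"])

        # Run the hard question through o1, outside the voice conversation.
        answer = client.chat.completions.create(
            model="o1",
            messages=[{"role": "user", "content": args["question"]}],
        ).choices[0].message.content

        # Return the result to the realtime session as the tool's output...
        ws.send(json.dumps({
            "type": "conversation.item.create",
            "item": {
                "type": "function_call_output",
                "call_id": event["call_id"],
                "output": answer,
            },
        }))
        # ...and ask the voice model to speak a response that uses it.
        ws.send(json.dumps({"type": "response.create"}))
```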
Re video: stay tuned, it’ll come next year. The model has been trained for this capability, so in the meantime you can experiment with it in ChatGPT.
Is there any chance of getting audio I/O capability in the Assistants API?
It would be very useful to be able to send audio to a specific assistant and get audio + text as output. Realtime capability would be great too, but basic multimodal support would already simplify things a lot.
Any new tools like Neuron Viewer?
Any chance Neuron Viewer can be updated with a newer model?
Why does the Realtime API sound so much worse in languages other than English compared to the audio-to-audio Advanced Voice Mode in the ChatGPT app?
Nothing to share just yet. We focused the first pass of prompt caching on making it as easy as possible to use (no API changes needed, no extra cost for cache writes). It’s a feature we care a lot about though … would be curious how you’d want to use a more structured cache?
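In the meantime, the main lever is prompt structure: caching matches on exact prefixes, so keep stable content at the front of the prompt and variable content at the end. A minimal sketch (the model name is a placeholder):

```python
from openai import OpenAI

client = OpenAI()

# Stable content (long instructions, reference docs, tool definitions) goes
# first: identical prefixes across requests are what get cache hits.
STATIC_SYSTEM_PROMPT = "...long, unchanging instructions and reference material..."

def answer(user_question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; caching kicks in automatically on supported models
        messages=[
            {"role": "system", "content": STATIC_SYSTEM_PROMPT},  # stable prefix
            {"role": "user", "content": user_question},           # variable suffix
        ],
    )
    return response.choices[0].message.content
```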