Sounds like a Firefox issue more than an OpenAI one.
It worked in other browsers; the issue in Firefox stemmed from how it handles sample rates differently. The API expects audio at 24,000 Hz, so once I switched to reading the sample rate dynamically, I also had to resample the captured audio down to the rate the API expects to receive.
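For anyone hitting the same thing: a minimal sketch of the resampling step, assuming Float32 PCM from the browser (Firefox often captures at 44,100 or 48,000 Hz) being converted to 24,000 Hz with simple linear interpolation. The function name and signature here are illustrative, not from my actual code.

```javascript
// Linear-interpolation resampler: Float32 PCM at the browser's native
// rate (e.g. AudioContext.sampleRate of 44100/48000 Hz) down to the
// 24000 Hz the API expects. Illustrative sketch, not production code.
function resampleTo24k(samples, inputRate, targetRate = 24000) {
  if (inputRate === targetRate) return samples; // nothing to do
  const ratio = inputRate / targetRate;
  const outLength = Math.floor(samples.length / ratio);
  const out = new Float32Array(outLength);
  for (let i = 0; i < outLength; i++) {
    const pos = i * ratio;          // fractional position in the input
    const idx = Math.floor(pos);
    const frac = pos - idx;
    const next = Math.min(idx + 1, samples.length - 1);
    // Interpolate linearly between the two nearest input samples
    out[i] = samples[idx] * (1 - frac) + samples[next] * frac;
  }
  return out;
}
```

For better quality you would low-pass filter before downsampling (or render through an `OfflineAudioContext` at 24,000 Hz), but linear interpolation is usually good enough for speech input.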
June:
Now it is being disclosed, with unguaranteed conditions and timeouts under which part of the input context is discounted.
It is entirely passive, to the point where you could keep using the API without knowing it was active, and repeated chat turns fall within its applicability. That is in contrast to Gemini, where you take manual control of storing and reusing the pregenerated state of the model.
So there isn't much to learn, except perhaps to pad a system prompt + fixed tools up to just over 1k tokens instead of just under, and to place any alterations as late as possible in the input, in those API calls where there is a high chance of hitting the same context again soon.
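Concretely, "alterations as late as possible" just means ordering the request so the stable prefix comes first. A hypothetical sketch, assuming a chat-style messages array (the function and field names here are mine, not from any SDK):

```javascript
// Order request content so the stable prefix (system prompt + fixed
// tool definitions) comes first and anything that varies comes last,
// letting automatic prefix caching hit on the shared portion.
// Illustrative only; names are hypothetical.
function buildMessages(staticSystemPrompt, history, latestUserInput) {
  return [
    { role: "system", content: staticSystemPrompt }, // stable, cache-friendly
    ...history,                                      // grows append-only
    { role: "user", content: latestUserInput },      // varies every call
  ];
}
```

Anything that changes per call (timestamps, user-specific data) should live in the trailing messages, never in the system prompt, or the cached prefix breaks on every request.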
@elm: you showed one picture re o1 that indicated new features such as system messages, structured outputs, etc. Are those also part of this release, or will they only follow later? I could not yet see them reflected in the docs.
Those will follow later. Word is the goal is before the end of the year.
@elm since there will be two more DevDays, do you get a sense that there will be further announcements on those days?