The Assistants API has been in beta for over a year now. Is the Assistants API a dead end? Is it just a proof of concept?
Thanks for this AMA! Is there any word on when the Assistants code interpreter tool will be out of preview/beta?
Will Sora have an API? If yes, when?
Will the desktop app ever get features similar to Anthropic's MCP servers? Will it ever get function calling?
When are you going to release the GPT-4o 64k output model?
In your opinion, how difficult was the interview process for getting a job at OpenAI compared to other places you've interviewed in the past?
When will you be including search capabilities in the API?
I feel like the Assistants API has been forgotten. Any news?
It’s the most powerful tool for devs!
Are there any plans around in-house robotics projects you guys are working on? And if so, are you able to share anything about them?
What are the possibilities for sending a larger context to the Advanced Voice / Realtime API to prime it for a specific purpose?
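For context, one way to seed a Realtime session today looks roughly like this — a sketch only, assuming the third-party `websockets` package (header kwarg is `extra_headers` in older releases, `additional_headers` in newer ones) and the beta event shapes `session.update` and `conversation.item.create`:

```python
import asyncio
import json
import os
import websockets

async def seed_session() -> None:
    url = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",
    }
    async with websockets.connect(url, extra_headers=headers) as ws:
        # Prime the session with purpose-specific instructions...
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {"instructions": "You are a museum tour guide."},
        }))
        # ...and inject prior context as a conversation item before any audio.
        await ws.send(json.dumps({
            "type": "conversation.item.create",
            "item": {
                "type": "message",
                "role": "user",
                "content": [{"type": "input_text", "text": "Background: ..."}],
            },
        }))

asyncio.run(seed_session())
```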
Can preference tuning be used to encourage longer answers (say, 10+ pages) and better use of the output context?
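For reference, preference tuning (DPO) in the fine-tuning API trains on preferred/non-preferred output pairs, so "prefer the longer answer" is at least expressible — a sketch, where `pairs.jsonl` is a made-up file whose preferred outputs are the longer completions:

```python
from openai import OpenAI

client = OpenAI()

# Each training example pairs a preferred (here: longer) completion with a
# non-preferred (shorter) one, in the preference-tuning JSONL format:
# {"input": {"messages": [...]},
#  "preferred_output": [{"role": "assistant", "content": "<long answer>"}],
#  "non_preferred_output": [{"role": "assistant", "content": "<short answer>"}]}
file = client.files.create(file=open("pairs.jsonl", "rb"), purpose="fine-tune")

job = client.fine_tuning.jobs.create(
    training_file=file.id,
    model="gpt-4o-2024-08-06",
    method={"type": "dpo", "dpo": {"hyperparameters": {"beta": 0.1}}},
)
```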
Where can I buy the Reindeer that speaks?
Can there be an easier way of doing continuation (with structured outputs and streaming)? Right now it's rather difficult to stitch two assistant responses together appropriately.
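To illustrate what that stitching looks like today — a rough sketch, not an endorsed pattern; note that under `json_object` mode the continuation is forced to be its own valid JSON document, which is exactly why naive concatenation breaks:

```python
from openai import OpenAI

client = OpenAI()

messages = [{"role": "user", "content": "Write a long structured report as JSON."}]

first = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    response_format={"type": "json_object"},
    max_tokens=512,  # deliberately small so the output gets cut off
)
partial = first.choices[0].message.content

if first.choices[0].finish_reason == "length":
    # Continuation: feed the partial output back and ask the model to resume.
    messages += [
        {"role": "assistant", "content": partial},
        {"role": "user", "content": "Continue exactly where you left off."},
    ]
    second = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        response_format={"type": "json_object"},
    )
    # Fragile stitch: the two halves may not join into one valid document.
    full = partial + second.choices[0].message.content
```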
I had a look at the docs for the new 'developer' message in Chat Completions. I'm not quite clear on whether you handle the system vs. developer message at your end, or whether it's something we as developers still need to handle via `if 'o1' in model_name: do this else: do that` kind of branching. And a follow-up, if I may: there was a mention of work you'd been doing on the relative 'weighting' of system vs. developer vs. user messages. Is that documented? I'd be really interested in digging into it if so! Thanks!
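(In other words, something like this shim on our side — a sketch of the workaround being asked about, not an official recommendation:)

```python
def top_message(model_name: str, instructions: str) -> dict:
    # o1-series models take a "developer" message where other chat models
    # take "system"; this is the kind of branching the question asks about.
    role = "developer" if model_name.startswith("o1") else "system"
    return {"role": role, "content": instructions}

messages = [
    top_message("o1-mini", "Answer tersely."),
    {"role": "user", "content": "Hello!"},
]
```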
P.S. No PHP SDK. The unloved and yet massively in-production stepchild of the developer world. Santa? Can you hear me?
Any plans for a multimodal model for 3D assets?
How about an LLM for humanoid animations?
What about SVG (think of Adobe turntables here)?
How will this work alongside agents?
Hello, I'm new. I watched an interesting test where a simple three-wheeled robot with vision and audio capabilities explored a room. In that video, and also in my own chats with GPT, I noticed that the bot forgets things. There is a limited memory option, but I often see it fail; instead, I'd like some method for storing important facts in the "memory" of the bot/session.
Is there any API that could help push ideas into this memory and manage the space allocated to it?
One possibility would be for GPT to make a function call to get a memory refresh…
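For example, a rough sketch of that function-call idea — `save_memory` and the local `MEMORY` list are made-up illustrations, not part of any OpenAI API:

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical local store standing in for the bot's long-term memory.
MEMORY: list[str] = []

tools = [{
    "type": "function",
    "function": {
        "name": "save_memory",  # made-up tool name, purely illustrative
        "description": "Store an important fact about the user or session.",
        "parameters": {
            "type": "object",
            "properties": {"fact": {"type": "string"}},
            "required": ["fact"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "My robot has three wheels, remember that."}],
    tools=tools,
)

# If the model decided to call the tool, persist the fact ourselves.
for call in response.choices[0].message.tool_calls or []:
    if call.function.name == "save_memory":
        MEMORY.append(json.loads(call.function.arguments)["fact"])
```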
If I can control how long/hard o1 thinks in the API, what's the difference from o1-pro (the upcoming API version), which is designed to think longer/harder?
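(The control being referred to is presumably the `reasoning_effort` parameter on o1; a minimal sketch:)

```python
from openai import OpenAI

client = OpenAI()

# "reasoning_effort" trades latency/cost for more thinking: low | medium | high.
response = client.chat.completions.create(
    model="o1",
    reasoning_effort="high",
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
)
print(response.choices[0].message.content)
```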
On structured outputs:
- Should we be confident building products with `client.beta.chat.completions.parse` for Pydantic structured outputs, or should we go the JSON schema route? (Both routes are sketched below.)
- Will the o1-mini API soon be able to do structured outputs?
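For context, the two routes in question, sketched side by side — `Answer` is a made-up example schema:

```python
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()

class Answer(BaseModel):  # illustrative schema, not from any official example
    title: str
    bullet_points: list[str]

# Route 1: Pydantic via the beta parse helper.
parsed = client.beta.chat.completions.parse(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize the AMA."}],
    response_format=Answer,
)
answer = parsed.choices[0].message.parsed  # an Answer instance

# Route 2: raw JSON schema via response_format.
raw = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize the AMA."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "answer",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "bullet_points": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["title", "bullet_points"],
                "additionalProperties": False,
            },
        },
    },
)
```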
How do you guys deal with overwhelming things (such as the number of questions you are and will be getting here)?
How many hours do you usually work at OpenAI? Do you have strict rules about working just 8 hours? Or if someone wants to work 10-16 hours because they're so passionate about what they're working on, do you allow that?