AMA on the 17th of December with OpenAI's API Team: Post Your Questions Here

Ask hypothetical questions now, without knowing what you're asking about until tomorrow? Clever twist.

“…so after that announcement, how long until the o1-preview shutoff, if one is coming?”

“…a parameter to tune reasoning length, from a minimum (preserving chat quality and speed) up to Pro zero-shot performance (which might confuse those who never asked for it?)”. [answer in stream: yes, but not up to o1 Pro level; that is reserved for a future model]

“…an inter-call ID reference for continuing from prior reasoning context, which we paid for but cannot supply back via the API?”

“…with that additional context length, will context caching see further competitive discounts?”

Or questions about what can already be known:

On the {n}th day of Chanukah, my true love gave to me:

  • how about complete developer customization of the Assistants file search tool text, i.e. the internal instructions of the tool, to fit the application: matching the provider and authority of the files (user or developer), informing what will be found behind the search, instructing HyDE queries of high semantic quality for the data, and so on.
  • and a working `max_num_results` Assistants run parameter
  • why don’t vector store search results include file names for chunks (as ChatGPT’s do), only file IDs? With names, the model could simply discuss the uploaded source file with the user or organization.
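For context on the `max_num_results` wish above: the v2 Assistants API does document this knob on the `file_search` tool definition; the complaint is that it has not reliably taken effect. A minimal sketch of where the parameter is supposed to plug in, building the request bodies only (the IDs are placeholders, and no API call is made):

```python
# Sketch: where max_num_results belongs in an Assistants v2 run request.
# "asst_placeholder" is a stand-in ID; this only constructs payload dicts.

def file_search_tool(max_num_results: int) -> dict:
    """Tool definition capping how many chunks file_search may return (1-50)."""
    return {
        "type": "file_search",
        "file_search": {"max_num_results": max_num_results},
    }

def run_payload(assistant_id: str, max_num_results: int) -> dict:
    """Body for POST /v1/threads/{thread_id}/runs, overriding the tool per run."""
    return {
        "assistant_id": assistant_id,
        "tools": [file_search_tool(max_num_results)],
    }

payload = run_payload("asst_placeholder", 8)
print(payload["tools"][0]["file_search"]["max_num_results"])  # 8
```

The same tool dict can be passed at assistant creation instead of per run; the forum grievance is precisely that results ignore the cap either way.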