I have updated openai_ex, an Elixir OpenAI API wrapper / client, with the latest features and APIs.
All API endpoints and features (as of May 1, 2024) are supported, including the Assistants API Beta 2 (with streaming Runs), DALL-E-3, Text-to-Speech, tool support in chat completions, and the streaming version of the chat completion endpoint. Cancellation of in-flight streaming requests is also supported.
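For reference, a minimal chat completion call, in both its one-shot and streaming forms, looks roughly like the sketch below; the exact return shapes are documented in the user guide livebooks, so verify details such as the `body_stream` field against your installed version.

```elixir
# Minimal sketch; request helpers as shown in the user guide livebooks.
# Treat field names such as `body_stream` as assumptions to double-check.
Mix.install([{:openai_ex, "~> 0.4.1"}])

alias OpenaiEx.ChatCompletion
alias OpenaiEx.ChatMessage

openai = OpenaiEx.new(System.fetch_env!("OPENAI_API_KEY"))

chat_req =
  ChatCompletion.new(
    model: "gpt-3.5-turbo",
    messages: [ChatMessage.user("Give me a one-line Elixir tip.")]
  )

# One-shot request: returns the decoded response.
completion = ChatCompletion.create(openai, chat_req)

# Streaming request: deltas arrive as a lazy enumerable of SSE events.
chat_stream = ChatCompletion.create(openai, chat_req, stream: true)

chat_stream.body_stream
|> Stream.flat_map(& &1)
|> Enum.each(&IO.inspect/1)
```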
Configuration of Finch pools and the API base URL is supported. Third-party (including local) LLMs behind an OpenAI-compatible proxy, as well as the Azure OpenAI API, are considered legitimate use cases.
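For example, pointing the client at a local OpenAI-compatible server with a dedicated Finch pool can be sketched as follows; `with_base_url/2` and `with_finch_name/2` are the configuration helpers assumed here, and the user guide livebooks show the exact options.

```elixir
# Sketch only: a dedicated Finch pool plus a custom base URL for a local
# OpenAI-compatible proxy. Helper names (`with_base_url/2`, `with_finch_name/2`)
# should be checked against the user guide for your version.

# In your application's supervision tree:
children = [
  {Finch, name: MyApp.OpenAIFinch, pools: %{default: [size: 10]}}
]

# Build a client that uses that pool and a non-default endpoint:
openai =
  OpenaiEx.new(System.get_env("OPENAI_API_KEY", "unused-for-local"))
  |> OpenaiEx.with_base_url("http://localhost:8000/v1")
  |> OpenaiEx.with_finch_name(MyApp.OpenAIFinch)
```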
The Elixir wrapper was written from the get-go to work with Livebook (the Elixir take on Jupyter notebooks). The user guide and all the code samples are livebooks.
I have just released a point release (0.4.1) with documentation for the new API calls, support for DALL-E-3 in the Image endpoint, and a change to the FQN of some modules.
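A DALL-E-3 generation request then looks roughly like the following; since this release touches module FQNs, the `OpenaiEx.Image` names below are assumptions, and the user guide livebooks are the source of truth.

```elixir
# Illustrative only: module FQNs changed in this release, so
# `OpenaiEx.Image` / `Image.Generate` below are assumptions; see the user
# guide livebooks for the current names.
openai = OpenaiEx.new(System.fetch_env!("OPENAI_API_KEY"))

img_req =
  OpenaiEx.Image.Generate.new(
    model: "dall-e-3",
    prompt: "a watercolor hummingbird in flight",
    size: "1024x1024"
  )

image = OpenaiEx.Image.generate(openai, img_req)
```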
What’s the sequence of calls you should be making to get the assistants/threads use case working? Do you have a recommendation for its design? It’s not linear anymore, since you have to periodically check whether a run is complete and also check the results of function calls. Do you suggest using a GenServer for every thread that keeps track of all this state and acts as a bridge between UI ↔ My App (Phoenix) ↔ OpenAI?
@subbu It’s early days yet, and I don’t have firm opinions on the answers to your questions. You might want to ask on the OpenAI API discussion/announcement thread (or start a fresh thread) on Elixir Forum, where other devs can chime in with their two cents.
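That said, purely for concreteness, here is one rough shape the per-thread GenServer idea could take: a process that owns a run, polls its status on a timer, and hands non-polling states (including `requires_action` for tool calls) back to the caller. This is a sketch rather than a recommendation, and the `OpenaiEx.Beta.Threads.Runs` call shapes and return keys are assumptions to be checked against the user guide.

```elixir
defmodule MyApp.ThreadWorker do
  @moduledoc """
  Sketch: one GenServer per assistant thread. It starts a run, polls the run
  status on an interval, and replies to the caller once the run leaves the
  polling states. The OpenaiEx.Beta.Threads.Runs call shapes are assumptions.
  """
  use GenServer

  @poll_ms 1_000

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  # Kick off a run on this worker's thread and block until it settles.
  def run(pid, assistant_id), do: GenServer.call(pid, {:run, assistant_id}, 120_000)

  @impl true
  def init(opts) do
    {:ok,
     %{
       openai: Keyword.fetch!(opts, :openai),
       thread_id: Keyword.fetch!(opts, :thread_id),
       run: nil,
       caller: nil
     }}
  end

  @impl true
  def handle_call({:run, assistant_id}, from, state) do
    # Assumed call shape; see the user guide for the real one.
    run =
      OpenaiEx.Beta.Threads.Runs.create(state.openai, %{
        thread_id: state.thread_id,
        assistant_id: assistant_id
      })

    Process.send_after(self(), :poll, @poll_ms)
    {:noreply, %{state | run: run, caller: from}}
  end

  @impl true
  def handle_info(:poll, %{run: run} = state) do
    latest =
      OpenaiEx.Beta.Threads.Runs.retrieve(state.openai, %{
        thread_id: state.thread_id,
        run_id: run["id"]
      })

    case latest["status"] do
      status when status in ["queued", "in_progress"] ->
        # Still working: poll again later.
        Process.send_after(self(), :poll, @poll_ms)
        {:noreply, %{state | run: latest}}

      _ ->
        # Terminal or "requires_action" (tool calls): hand the run back to the
        # caller, which decides whether to submit tool outputs, read messages, etc.
        GenServer.reply(state.caller, latest)
        {:noreply, %{state | run: latest, caller: nil}}
    end
  end
end
```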
Shoutout to GitHub users @kofron for the Portkey PR, @adammokan for the project ID PR, and @daniellionel01 for filing the issue that revealed the :nxdomain problem.