I have updated openai_ex, an Elixir OpenAI API wrapper / client, with the new features and APIs.
v0.4.2 adds support for the Assistants API Beta, DALL-E-3, Text-To-Speech, and tool calls in chat completions.
It already had full support for streaming chat completions.
The Elixir wrapper was written from the get-go to work with Livebook (the Elixir take on Jupyter notebooks). The user guide and all the code samples are Livebooks.
I have just released a point release (0.4.1) with documentation for the new API calls, support for DALL-E-3 in the Image endpoint, and a change to the fully qualified names of some modules.
What’s the sequence of calls you should be making to get the assistants/threads use case working? Do you have a recommendation for its design? It’s not linear anymore: you have to periodically check whether a run is complete, and also handle the results of function calls. Do you suggest using a GenServer for every thread that keeps track of all this state and acts as a bridge between UI ↔ My App (Phoenix) ↔ OpenAI?
@subbu It’s early days yet, and I don’t have firm opinions on the answers to your questions. You might want to ask on the OpenAI API discussion / announcement thread (or start a fresh thread) on the Elixir Forum, where other devs can chime in with their 2 cents.
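That said, the GenServer-per-thread idea you describe could be sketched roughly like this. This is purely a sketch, not a recommendation: the actual OpenAI call is injected as a `poll_fun` callback (so no particular openai_ex function signature is assumed), and the names `ThreadRunPoller`, `:notify`, and the status strings are illustrative, matching the run statuses documented by OpenAI (`"completed"`, `"requires_action"`, etc.).

```elixir
defmodule ThreadRunPoller do
  @moduledoc """
  Sketch: one GenServer per assistant run that polls its status
  and forwards lifecycle events to an interested process (e.g. a
  Phoenix LiveView). The API call itself is passed in as a function.
  """
  use GenServer

  @poll_interval 1_000

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

  @impl true
  def init(opts) do
    state = %{
      # poll_fun: zero-arity fn returning {:ok, status_string}
      poll_fun: Keyword.fetch!(opts, :poll_fun),
      # notify: pid that receives run lifecycle messages
      notify: Keyword.fetch!(opts, :notify)
    }

    schedule_poll()
    {:ok, state}
  end

  @impl true
  def handle_info(:poll, state) do
    case state.poll_fun.() do
      {:ok, "completed"} ->
        send(state.notify, {:run_done, :completed})
        {:stop, :normal, state}

      {:ok, "requires_action"} ->
        # Here the owning process would execute the requested tool
        # calls, submit their outputs, and let polling continue.
        send(state.notify, :run_needs_action)
        schedule_poll()
        {:noreply, state}

      {:ok, _still_in_progress} ->
        schedule_poll()
        {:noreply, state}
    end
  end

  defp schedule_poll, do: Process.send_after(self(), :poll, @poll_interval)
end
```

One of these per thread/run, supervised under a `DynamicSupervisor`, would keep the polling state out of your UI processes; but as noted above, I haven't settled on a preferred design.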