I have updated openai_ex, an Elixir OpenAI API wrapper / client, with the new features and APIs.
All API endpoints and features (as of Nov 15, 2023) are supported, including the Assistants API Beta, DALL-E-3, Text-To-Speech, the tools support in chat completions, and the streaming version of the chat completion endpoint. Streaming request cancellation is also supported.
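For anyone curious what the streaming chat completion looks like in practice, here's a minimal sketch. The module path (`OpenaiEx.Chat.Completions` vs. the older `OpenaiEx.ChatCompletion`) and the exact shape of the streamed response have changed between versions, so treat the names below as assumptions and check the user guide Livebook for your version:

```elixir
# Hedged sketch of a (streaming) chat completion with openai_ex.
# Module/function names and the response shape are version-dependent
# assumptions; consult the user guide for the release you're on.
openai = OpenaiEx.new(System.fetch_env!("OPENAI_API_KEY"))

chat_req =
  OpenaiEx.Chat.Completions.new(
    model: "gpt-3.5-turbo",
    messages: [OpenaiEx.ChatMessage.user("Write a haiku about Elixir")]
  )

# One-shot completion
openai |> OpenaiEx.Chat.Completions.create(chat_req) |> IO.inspect()

# Streaming variant: consume delta chunks as they arrive
chat_stream = OpenaiEx.Chat.Completions.create(openai, chat_req, stream: true)

chat_stream.body_stream
|> Stream.flat_map(& &1)
|> Enum.each(fn chunk ->
  # Each chunk carries an incremental "delta"; print the content as it streams
  IO.write(get_in(chunk, ["choices", Access.at(0), "delta", "content"]) || "")
end)
```

Cancelling a streaming request mid-flight is also supported, which is handy when a user navigates away from a Livebook cell or LiveView.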
Configuration of Finch pools and the API base URL is supported. Third-party (including local) LLMs behind an OpenAI-compatible proxy, as well as the Azure OpenAI API, are considered legitimate use cases.
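As a sketch of the proxy use case, pointing the client at a local OpenAI-compatible server might look like this. The `with_base_url/2` helper name follows the docs, but verify it against your installed version; the URL is purely illustrative:

```elixir
# Hedged sketch: using openai_ex against a local OpenAI-compatible proxy.
# The helper name with_base_url/2 and the example URL are assumptions.
openai =
  System.fetch_env!("OPENAI_API_KEY")
  |> OpenaiEx.new()
  |> OpenaiEx.with_base_url("http://localhost:8000/v1")
```

From there, the same endpoint modules (chat, images, audio, etc.) work unchanged against the alternative backend.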
The Elixir wrapper was written from the get-go to work with Livebook (the Elixir take on Jupyter notebooks). The user guide and all the code samples are Livebooks.
- Code - GitHub - restlessronin/openai_ex: Community maintained OpenAI API Elixir client for Livebook
- Docs - OpenaiEx User Guide — openai_ex v0.5.7
- Announcements - https://elixirforum.com/t/openai-ex-openai-api-client-library/55353
I have just released a point version (0.4.1) with documentation for the new API calls, support for DALL-E-3 in the Image endpoint, and a change to the FQN of some modules.
Still catching up with the new features. I just published v0.4.2:
- Added the Text-To-Speech endpoint
- Changed the FQN of the Audio and Image functions to match the equivalent Python functions (and yet I left the major and minor version numbers unchanged)
- Updated the docs.
@restlessronin random side question:
Do you know the answer to this:
If I do a GET request for a Thread or Message, how can I know which Assistant the Thread or Message is a part of?
@fra_ab If you look at the API reference for the Message object, the `assistant_id` field has the assistant information.
In contrast, the Thread object has no corresponding `assistant_id` field.
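To make that concrete, retrieving a message and reading its `assistant_id` might look like the sketch below. The module path and argument names are assumptions modeled on the Python client naming convention the library follows; check the openai_ex docs for the exact call in your version:

```elixir
# Hedged sketch: fetch a Message from the Assistants API beta and read
# which assistant produced it. Module path and arguments are assumptions.
openai = OpenaiEx.new(System.fetch_env!("OPENAI_API_KEY"))

message = OpenaiEx.Beta.Threads.Messages.retrieve(openai, thread_id, message_id)

# The Message object carries the assistant reference; a Thread does not.
message["assistant_id"]
```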
What’s the sequence of calls you should be making to get the assistants/threads use case working? Do you have a recommendation for its design? It’s no longer linear, since it requires you to periodically check whether a run is complete and also check the results of function calls. Do you suggest using a GenServer for every thread that keeps track of all this state and acts as a bridge between UI ↔ My App (Phoenix) ↔ OpenAI?
@subbu It’s early days yet, and I don’t have firm opinions on the answers to your questions. You might want to ask on the OpenAI API discussion / announcement thread (or start a fresh thread) on the Elixir Forum, where other devs can chime in with their 2 cents.
I have just released v0.5.7 with support for Azure OpenAI API.