`openai_ex`: Elixir client with latest APIs

I have updated openai_ex, an Elixir OpenAI API wrapper / client, with the new features and APIs.

All API endpoints and features (as of May 1, 2024) are supported, including the Assistants API Beta 2 (with streaming Runs), DALL-E 3, Text-to-Speech, tool calling in chat completions, and the streaming version of the chat completions endpoint. Cancellation of streaming requests is also supported.
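For reference, a minimal chat completion call (one-shot and streaming) looks roughly like this. This is a sketch based on the user guide; the exact return shapes have shifted between versions, so treat it as illustrative rather than version-exact:

```elixir
alias OpenaiEx.{Chat, ChatMessage}

# Build a client from an API key (env var name is a placeholder).
openai = OpenaiEx.new(System.fetch_env!("OPENAI_API_KEY"))

chat_req =
  Chat.Completions.new(
    model: "gpt-4o",
    messages: [ChatMessage.user("Say hello in one short sentence.")]
  )

# One-shot completion.
completion = Chat.Completions.create(openai, chat_req)

# Streaming completion: the response exposes a body_stream of chunks,
# which can be cancelled partway through.
stream = Chat.Completions.create(openai, chat_req, stream: true)

stream.body_stream
|> Stream.flat_map(& &1)
|> Enum.each(fn chunk -> IO.inspect(chunk) end)
```

The Livebook user guide has the authoritative, version-matched forms of these calls.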

Configuration of Finch pools and of the API base URL is supported. Third-party (including local) LLMs behind an OpenAI-compatible proxy, as well as the Azure OpenAI API, are considered legitimate use cases.
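As a sketch of the base-URL configuration (the local endpoint below is a hypothetical example, and the helper name follows the user guide):

```elixir
# Point the client at a local OpenAI-compatible server instead of
# api.openai.com. Both the key and the URL are placeholders.
openai =
  "placeholder-key"
  |> OpenaiEx.new()
  |> OpenaiEx.with_base_url("http://localhost:8000/v1")
```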

The wrapper was written from the get-go to work with Livebook (the Elixir take on Jupyter notebooks). The user guide and all the code samples are Livebooks.

  1. Code - GitHub - restlessronin/openai_ex: Community maintained OpenAI API Elixir client for Livebook
  2. Docs - OpenaiEx User Guide — openai_ex v0.6.3
  3. Announcements - https://elixirforum.com/t/openai-ex-openai-api-client-library/55353

I have just released a point version (0.4.1) with documentation for the new API calls, support for DALL-E-3 in the Image endpoint, and a change to the FQN of some modules.


Still catching up with the new features. I just published v0.4.2:

  1. Added the Text-To-Speech endpoint
  2. Changed the FQN of the Audio and Image functions to match the equivalent Python functions (and yet, I left the major and minor versions unchanged :frowning: )
  3. Updated the docs.

@restlessronin random side question:

Do you know the answer to this:

If I do a GET request for a Thread or Message, how can I know which Assistant the Thread or Message is a part of?

@fra_ab If you look at the API reference for the message object, the assistant_id field carries the assistant information.

In contrast, the Thread object does not have a corresponding Assistant object.
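Concretely, retrieving a message and reading its assistant_id might look like this. The ids are hypothetical, and the module path is an assumption (it mirrors the Python client), so double-check it against the user guide:

```elixir
# "thread_abc123" and "msg_abc123" are placeholder ids for illustration.
message =
  OpenaiEx.Beta.Threads.Messages.retrieve(openai,
    thread_id: "thread_abc123",
    message_id: "msg_abc123"
  )

# null when the message was written by the user rather than an assistant.
assistant_id = message["assistant_id"]
```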

What’s the sequence of calls you should be making to get the assistants/threads use case working? Do you have a recommendation for its design? It’s not linear anymore: you have to periodically check whether a run is complete, and also check the results of function calls. Do you suggest using a GenServer for every thread that keeps track of all this state and acts as a bridge between UI ↔ My App (Phoenix) ↔ OpenAI?

@subbu It’s early days yet, and I don’t have firm opinions on the answers to your questions. You might want to ask on the OpenAI API discussion / announcement thread (or start a fresh thread) on the Elixir Forum, where other devs can chime in with their 2 cents.
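For what it’s worth, the “periodically check if a run is complete” part of that flow might be sketched as follows, independent of whether it lives in a GenServer. The function name and signature follow the Assistants endpoints as exposed in the user guide; treat the exact call as an assumption:

```elixir
defmodule RunPoller do
  @moduledoc """
  Minimal polling sketch: re-fetch a run until it leaves the
  queued / in_progress states. Error handling is omitted.
  """

  @poll_interval_ms 500

  def await(openai, thread_id, run_id) do
    run =
      OpenaiEx.Beta.Threads.Runs.retrieve(openai,
        thread_id: thread_id,
        run_id: run_id
      )

    case run["status"] do
      status when status in ["queued", "in_progress"] ->
        Process.sleep(@poll_interval_ms)
        await(openai, thread_id, run_id)

      # A "requires_action" status means function-call outputs must be
      # submitted before polling resumes; terminal states also land here.
      _other ->
        run
    end
  end
end
```

Wrapping a loop like this in a per-thread GenServer, as you suggest, is one reasonable way to hold the run state between the UI and OpenAI.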

But a message only has an assistant_id if an assistant generated it; if the user created the message, assistant_id is null.

Check out this: How to know which Thread/Message belongs to which Assistant? - #8 by fra_ab

Also @subbu check this out: GitHub - dvcrn/chatgpt-ui: ChatGPT UI with auth, targeted towards business-ey usecases, written in Elixir + LiveView


I have just released v0.5.7 with support for Azure OpenAI API.

Release v0.6.3 with support for Assistants API Beta 2 and streaming Run execution.

Released v0.6.4 with a fix for the “bug” described here.

I have published v0.8.3 to bring back the legacy Completions API, which is still used by 3rd-party API providers.

Shoutout to GitHub user @kalocide for bringing the issue to my attention and for the PR.

I have just released v0.8.4 to

  1. add Portkey support
  2. add support for the OpenAI project id with legacy keys
  3. explicitly handle :nxdomain Finch errors

Shoutout to GitHub users @kofron for the Portkey PR, @adammokan for the project id PR, and @daniellionel01 for filing the issue that revealed the :nxdomain problem.
