Switching from the Assistants API to Chat Completions?

Here's one potential way of implementing this. The basic architectural philosophy is to use Assistants, Threads, Messages & Runs as-is and use metadata to drive the inputs to Chat Completions. In this manner, one gets Chat Completions until the Assistants API matures.

A basic implementation can subclass off of BaseAssistant, BaseThread, BaseMessage & BaseRun from betaassi/src/openai_session_handler/models at main · icdev2dev/betaassi · GitHub:

from typing import Optional
from pydantic import Field
# Base classes from betaassi (src/openai_session_handler/models); exact import path may vary.
from openai_session_handler.models import BaseAssistant, BaseThread, BaseMessage

class CCAssistant(BaseAssistant):
    pass

class CCThread(BaseThread):
    pass

class CCMessage(BaseMessage):
    # Uses one metadata field; records the "real" Chat Completions role.
    p_role: Optional[str] = Field(default="")

Each variable declared this way uses up one metadata field (out of 15, because each of the Basexxxx classes already uses one metadata field itself).

Then you can drive all the messages in the thread through p_role: initialize the thread's first message to provide the "system" context from the assistant's instructions by creating the message with role="user", but distinguishing the actual role through p_role="system".
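
For illustration only, here is a minimal sketch of that initialization using the raw OpenAI beta client rather than the CC* wrappers above (the variable names and the instructions string are placeholders, not part of betaassi):

from openai import OpenAI

client = OpenAI()
thread = client.beta.threads.create()

# Placeholder; in practice this would come from the assistant's instructions.
instructions = "You are a helpful assistant."

# The Assistants API only accepts role="user" here, so the real role
# ("system") is recorded in the p_role metadata key instead.
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content=instructions,
    metadata={"p_role": "system"},
)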

Every time one needs output from Chat Completions, the message list for the call is built by iterating over all messages in the thread in insertion order. A small transformation is applied to each message to put it into the format the Chat Completions API expects.
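
A rough sketch of that transformation, continuing the snippet above (again using the raw client; the role mapping and model name shown here are one possible choice, not the library's own):

# List thread messages oldest-first, then map each into a Chat Completions message.
messages = client.beta.threads.messages.list(thread_id=thread.id, order="asc")

chat_messages = []
for m in messages.data:
    md = m.metadata or {}
    role = md.get("p_role") or m.role
    # Earlier answers stored with p_role="chatcompletion" go back in as "assistant" turns.
    if role == "chatcompletion":
        role = "assistant"
    text = "".join(p.text.value for p in m.content if p.type == "text")
    chat_messages.append({"role": role, "content": text})

reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=chat_messages)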

When you get an answer back from Chat Completions, you can append it to the thread as a message with role="user" and p_role="chatcompletion" (remembering that the message role in the Assistants API is restricted to "user").
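
Continuing the same sketch, appending the answer back to the thread might look like this (again just an illustration, not the betaassi API):

answer = reply.choices[0].message.content

# role must still be "user"; the metadata records that this text
# actually came from Chat Completions.
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content=answer,
    metadata={"p_role": "chatcompletion"},
)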

That, in a nutshell, is it.

edit: will open-source the implementation soon. But with the above, anyone can get started.
