Communicating with an AI Assistant using the Chat API – is it possible? Suggestions?

Hello,

My use case: I want to create a website where users can engage with an unbiased chat Assistant built to answer questions about a very large political document. I want it embedded on a public website so it is accessible and people are not required to have an OpenAI account to use it.

My issue: I’ve created the website and chat window, and it can successfully call OpenAI via the Chat API and get a response. But when I change “model:” from “gpt-4o-mini” to my Assistant ID (this is what ChatGPT suggested to me, because alas I am not an experienced developer), I get the error “the model does not exist or you do not have access to it.”

I cannot tell from forum responses or the documentation whether I can actually use the Chat API to communicate with an Assistant I’ve already built, or whether I need to use the Assistants API to do that. From what I’ve read, the Assistants API, with threads, is for more complex problem solving.

  1. Am I missing something by trying to send the Assistant ID in “model:”, or should this work if my API key, Assistant ID, and permissions are correct?
  2. Any creative solutions? Maybe I could use Completions and make sure some of the Assistant’s instructions are embedded in my POST?

I have the gpt-4o-mini Assistant built and it works well in my Playground Q&A, which is why I am trying to use it. I think I have exhausted ChatGPT as my coding partner on this one, since I keep going in circles testing the API and it insists I can put either the Assistant ID or the GPT’s model in “model:”.

You should pass your Assistant ID as assistant_id when you create a run with the Assistants API, and keep the model field as it was: gpt-4o-mini. The Chat Completions “model:” field only accepts model names, not Assistant IDs, which is why you get that error. Here’s this part in the Assistants docs: https://platform.openai.com/docs/assistants/quickstart/step-4-create-a-run
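In case it helps, here is roughly what that flow looks like with the Python SDK’s beta Assistants endpoints (openai >= 1.x). This is a minimal sketch only; the Assistant ID and the question are placeholders you would replace with your own:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Create a thread to hold the conversation
thread = client.beta.threads.create()

# 2. Add the user's question to the thread
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What does the document say about term limits?",  # placeholder question
)

# 3. Run your existing Assistant on the thread. The asst_... ID goes here,
#    not in a "model" field (the Assistant already has its model configured).
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id="asst_XXXXXXXXXXXX",  # placeholder: your Assistant ID
)

# 4. Read back the Assistant's reply (newest message first)
if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)
```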

If you need help adding an OpenAI Assistant to your website, feel free to DM me. I run a startup that simplifies it, and well… I’ve spent a lot of time doing it.


Thanks Domas! I’m having some success using the Chat API with a system message alongside the user prompt in this iteration, so I’ll probably stick with it since I can debug it with the GPT assisting me. This is more of an MVP; I will reach out if I need something more robust!
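Roughly what I mean, in case it helps anyone else (a minimal sketch; the system prompt and question here are placeholders, not my real ones):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system message carries the instructions that would otherwise live in
# the Assistant's configuration (placeholder wording).
SYSTEM_PROMPT = (
    "You are an unbiased assistant that answers questions only about the "
    "provided political document. If the answer is not in the document, say so."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What does the document say about term limits?"},  # placeholder
    ],
)

print(response.choices[0].message.content)
```

One caveat with this approach: the Chat API doesn’t see the files attached to the Assistant, so the relevant document text has to be included in the prompt or pulled in some other way.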

I did try to leverage the GPT’s knowledge to make the connection using the Assistants API after your note, but it wasn’t able to debug as well, and I kind of ran into an endless loop of it telling me to figure it out through the documentation, which isn’t ideal for this MVP.

Thanks,
Abbott