Trying to call the API for my assistant built with the web interface

I built an assistant using platform.openai.com. I tested it in the playground and it works great. Now I’m trying to call it from Postman, and no matter what I do, I always get an “invalid_request_error”. The docs show how to create an assistant using the API, how to create a thread, and so on. All I want to do is send a string to my assistant and read the response. I know that the API is on “v2” and that I have to add the header “OpenAI-Beta: assistants=v2” yet still use the “old” endpoint (/v1/assistants/{assistant_id}/messages).

And I also set a header for authorization:
“Bearer : {my secret token}”

I’m POSTing the request body as:

{
  "input": {
    "type": "text",
    "text": "my question…"
  }
}

Yet, it always returns:

{
  "error": {
    "message": "Invalid URL (POST /v1/assistants/{assistant_id}/messages)",
    "type": "invalid_request_error",
    "param": null,
    "code": null
  }
}

I’ve triple checked my assistant ID.

Any ideas?

Ironically, I asked ChatGPT, but it doesn’t know what to do with v2.


I think you need to take a deeper look at the multi-step procedure needed to interact with the Assistants API; see the “Documentation” and “API Reference” links in the sidebar of this forum.

Notably: an assistant doesn’t act as an entity that directly responds to arbitrary (or AI-fabricated) requests.

A conversation is contained in a thread you create.
A message is placed in the thread you created.
A run is invoked by specifying the assistant and the thread.
Status is polled to see when a response is ready.
The tail of the thread is retrieved to obtain the response.

To send the actual user question to the correct URL (which includes the thread ID), this is a cURL request you can translate to Postman; you can see, though, that you would need procedural, event-driven code to handle the full sequence.

curl https://api.openai.com/v1/threads/thread_abc123/messages \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{
      "role": "user",
      "content": "How does AI work? Explain it in simple terms."
    }'
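
For completeness, the other calls in the sequence look roughly like this. This is a sketch; thread_abc123, run_abc123, and asst_abc123 are placeholders you replace with the IDs returned to you.

# Create a thread (the response contains the thread ID)
curl https://api.openai.com/v1/threads \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Beta: assistants=v2" \
  -d ''

# Create a run on the thread, naming the assistant that should answer
curl https://api.openai.com/v1/threads/thread_abc123/runs \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{
      "assistant_id": "asst_abc123"
    }'

# Poll the run (run_abc123 is the "id" from the previous response) until "status" is "completed"
curl https://api.openai.com/v1/threads/thread_abc123/runs/run_abc123 \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Beta: assistants=v2"

# Retrieve the newest message in the thread (the assistant's reply)
curl "https://api.openai.com/v1/threads/thread_abc123/messages?limit=1&order=desc" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Beta: assistants=v2"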

Thanks for the follow up.

OK, in fact, I read the documentation, and what I did was watch the log window of the playground while I was interacting with my assistant. I saw that it works in steps: create a thread, then create a message and assign it to the thread, and then run it (with or without streaming, which I don’t think I fully get). A complete, end-to-end simple example would have been great, but in the docs the assistant itself seems to be created with the API.

I was wondering if there was a “simple” way, kind of like when I call the completions API.

What I don’t get from the docs is where I use the URL for my assistant. From what I read, I can create a thread at “/v1/threads”, assign the message to the thread at “/v1/threads/{thread-id-obtained-above}/messages”, and run it (no streaming) at “/v1/threads/{thread-id-obtained-above}/runs”.

Where would I use the assistant ID that I created in the web UI ?

Thanks for your time, greatly appreciated!


Good questions.

The playground lets you access the same resources and storage in your organization as you would access via API calls. The ID of an assistant created there can be used the same way as one created by an API call that supplies the instructions, model, etc. It can be convenient to make an assistant once through an interface you didn’t have to write yourself.

Where you use the assistant ID is in the run step. At that point you have a thread with a user message in it. You can send it to a “You are an expert cook with recipes” assistant or to a “a pirate insults the user’s question” assistant; whichever ID you specify in the run does the answering.
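
A run created against your thread might look like this (a sketch; thread_abc123 is a placeholder, and asst_abc123 stands in for the asst_… ID shown for your playground-built assistant). You can also pass an optional “instructions” field here to override the assistant’s instructions for that one run.

curl https://api.openai.com/v1/threads/thread_abc123/runs \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "OpenAI-Beta: assistants=v2" \
  -d '{
      "assistant_id": "asst_abc123"
    }'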

The convenience of having conversations in threads that are managed for you, and of having answers extracted from uploaded files, is indeed somewhat offset by the complexity of a dozen different API methods (like keeping track of, and deleting, files a user attached that ended up in a vector store; still keeping track of which conversations belong to which users; etc.).

Then, if you provide your own tool that you code, the polled run status may come back saying the AI wants to invoke it (a requires_action status).

Assistants is a particular niche - not for the novice, and not for the expert who would return to chat completions.


Thanks a lot, I was just reading… Like… REALLY reading the docs and finally saw it lol!

I guess I’m bad at reading JSON.

Thanks, it’s all clear now!


Finally got it. Works great. More convoluted than I was looking for, but your instructions helped me a lot. Thanks a million.