Responses API Access To GPTs

I’ve held off developing a programmatic interface that calls my Assistant because, as I understand it, the Responses API is supposed to replace the Assistants API. It’s not clear to me how I can access my Assistant with the Responses API. Is this intended to be supported? I originally created the Assistant because I wanted to access a custom GPT programmatically, and the Assistant has the same crafted capability as my custom GPT, so it’s not clear how to proceed. I’ve read in the forum that users get different results from direct GPT access versus the Assistants API, and I hope the Responses API mimics the results of the GPT when accessed directly. I’ve read the Responses API docs, but it’s not clear to me how the Responses API can access an Assistant.

So, what am I after: I want to access a custom GPT (or an incarnation of it as an Assistant) programmatically. Without using the Assistants API, can I do that with the Responses API?

You’ll need to supply input messages and available tools with each new thread. You can use previous_response_id to build on an existing thread for multi-turn conversations.

I recommend reading the docs: https://platform.openai.com/docs/api-reference/responses
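To make that concrete, here is a minimal sketch of the request bodies involved, assuming the `POST /v1/responses` body shape from the API reference; the model name, instructions text, and response ID below are placeholders:

```python
import json

def build_responses_request(user_text, instructions=None,
                            previous_response_id=None, model="gpt-4o"):
    """Build the JSON body for a POST to /v1/responses (shape assumed
    from the API reference; field names may evolve)."""
    body = {
        "model": model,
        "input": [{"role": "user", "content": user_text}],
    }
    if instructions is not None:
        body["instructions"] = instructions
    if previous_response_id is not None:
        # Chains this turn onto the server-stored conversation.
        body["previous_response_id"] = previous_response_id
    return body

# First turn: supply your GPT-style instructions and the user message.
first = build_responses_request(
    "What can you do?",
    instructions="You are my custom GPT.",  # placeholder instructions
)

# Follow-up turn: reference the id returned by the first call.
# Note: the instructions parameter is not inherited from the previous
# response, so re-send it if you rely on it.
followup = build_responses_request(
    "Tell me more.",
    instructions="You are my custom GPT.",
    previous_response_id="resp_abc123",  # placeholder id
)

print(json.dumps(followup, indent=2))
```

The same two fields map directly onto the official SDKs, so this is mainly a way to see which parts of the payload change between the first turn and a chained follow-up.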

So instead of supplying the Assistant ID of a custom-crafted Assistant, I need to build the conversation up to the same level as the Assistant and essentially get the same result once it reaches that level?

The Assistants API isn’t technically deprecated yet, so you can always go back to it if you’re struggling with Responses. Otherwise, I would really recommend reading the documentation.


Nothing about a ChatGPT GPT is “conversations”. Nothing is actually learned from any chats with it, either chats by users or the developer’s chats with the GPT builder. The GPT builder AI just writes the contents of an instruction box for you, the same one that you can see.

If you are discussing replicating the “instructions” box of a ChatGPT GPT (which is indeed powered only by the instructions input box you see, and runs only on the ChatGPT version of GPT-4o), or the “instructions” field of Assistants (where you can select a different Assistant, with its own instructions, to run a thread), the Responses API has a similar mechanism:

instructions parameter with an API call.

This is injected as the first system message any time you use it. That happens whether you are sending an “input” of a chatbot’s user and assistant exchanges, or reusing the response ID of a server-stored conversation for chat persistence.

The instructions field is added in addition to any system message that was already in the input, so you can have both. That gives you something you can add, remove, or change on every call, even on a server-side chat where the first message is out of your control because it is reused via a previous response ID.

So: just treat the instructions API parameter as your GPT’s or Assistant’s instructions, and don’t put a “system” message in the input.
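A rough mental model of the injection described above, as a simplified sketch (the function name is mine, and this is not the server’s exact implementation):

```python
def effective_messages(instructions, input_messages):
    """Approximate how the instructions parameter is combined with the
    input: it is injected as a leading system message, ahead of whatever
    exchanges you send. Simplified mental model only."""
    prompt = []
    if instructions:
        prompt.append({"role": "system", "content": instructions})
    prompt.extend(input_messages)
    return prompt

# The instructions land first, before the chat exchanges you send.
prompt = effective_messages(
    "Answer as my custom GPT.",  # placeholder instructions
    [
        {"role": "user", "content": "Hello"},
        {"role": "assistant", "content": "Hi! How can I help?"},
        {"role": "user", "content": "What are you?"},
    ],
)
```

Because the instructions are prepended fresh on every call, swapping them out between calls changes the persona without touching the stored conversation.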

You ultimately get the same thing as just building your chatbot with chat completions and sending exactly what you want with every recurring API request.


I see, thank you. I’m going to build up a prototype and experiment with the Responses API and the instructions option.


Thank you for the code excerpt. And I agree about the Assistants API; I was just getting used to thinking in terms of threads and the Assistants API when the Responses API came out. I’ve a prototype up and running with the Responses API, and I’ve a few questions:

  1. If the previous_response_id is used for subsequent Responses API calls, do the instructions need to be supplied again, or does the Responses API side already know the instructions context because of the valid previous_response_id?
  2. It appears that Response objects are saved for 30 days by default. If a previous_response_id is used past the 30-day window, will the Responses API silently ignore the expired previous_response_id, or will it return an error?

If you are building your new Responses-based Assistant with the flexibility to change the answering assistant mid-conversation, then you would use the instructions parameter, and you need to keep supplying it in every API call.

If, instead, you want the chat to be permanently based on one set of instructions, you can pass it as a “system”-role message at the start of the first call of a session. That saves a bit of network bandwidth on each follow-up call, since it becomes part of the input reused via the response ID.
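The two approaches can be sketched as request builders (hypothetical helper names; the raw /v1/responses body shape is assumed):

```python
def per_call_instructions_body(instructions, user_text, prev_id=None,
                               model="gpt-4o"):
    """Option 1: re-send the instructions parameter on every call, which
    lets you swap the answering assistant mid-conversation."""
    body = {
        "model": model,
        "instructions": instructions,
        "input": [{"role": "user", "content": user_text}],
    }
    if prev_id:
        body["previous_response_id"] = prev_id
    return body

def one_time_system_body(system_text, user_text, model="gpt-4o"):
    """Option 2, first call of a session: a system message stored with
    the conversation, so follow-ups need not resend it."""
    return {
        "model": model,
        "input": [{"role": "system", "content": system_text},
                  {"role": "user", "content": user_text}],
    }

def one_time_followup_body(user_text, prev_id, model="gpt-4o"):
    """Option 2, later calls: only the new user turn travels over the
    wire; the stored conversation already carries the system message."""
    return {
        "model": model,
        "previous_response_id": prev_id,
        "input": [{"role": "user", "content": user_text}],
    }
```

Option 2 trades flexibility for smaller follow-up payloads, which is the bandwidth point made above.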

Response IDs have not actually been deleted since the rollout started two months ago; 30 days is a minimum. If the most recent response ID you are trying to employ has expired or been deleted, you cannot follow up, because the server-stored chat history is gone.
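Since an expired ID means the server-side history is gone, one defensive pattern is to mirror the transcript locally and rebuild the conversation when chaining is no longer possible. A hypothetical fallback sketch, not an official mechanism:

```python
def next_request(user_text, prev_id, local_transcript, model="gpt-4o"):
    """Chain onto the stored conversation when a previous response id is
    available; otherwise rebuild the conversation from a transcript
    mirrored locally. The fallback assumes you append every user and
    assistant turn to local_transcript as the session runs."""
    if prev_id is not None:
        return {
            "model": model,
            "previous_response_id": prev_id,
            "input": [{"role": "user", "content": user_text}],
        }
    # prev_id expired or deleted: resend the full history instead.
    return {
        "model": model,
        "input": local_transcript + [{"role": "user", "content": user_text}],
    }
```

Keeping the local mirror costs a little storage but means an expired previous_response_id degrades into a larger request rather than a lost conversation.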
