We’re winding down the Assistants API beta. It will sunset one year from now, August 26, 2026. We’ve put together a guide to help you migrate to the Responses API: https://platform.openai.com/docs/assistants/migration.
Assistants were our early take on how agents could be built (before reasoning models). In the Responses API announcement, we said we’d follow up on deprecating the Assistants API once Responses reached feature parity — and it now has. Based on your feedback, we’ve folded the best parts of Assistants into Responses, including code interpreter and persistent conversations.
The Responses API is simpler and includes built-in tools (deep research, MCP, and computer use). With a single call, you can run multi-step workflows across tools and model turns. And with GPT-5, reasoning tokens are preserved between turns.
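Here's a minimal sketch of what that looks like (model name and tool configuration are illustrative; see the docs for the exact shapes):

```python
from openai import OpenAI

client = OpenAI()

# One call: the model can invoke the built-in code interpreter across
# multiple internal steps before returning its final answer.
response = client.responses.create(
    model="gpt-5",
    tools=[{"type": "code_interpreter", "container": {"type": "auto"}}],
    input="Use code to compute the 40th Fibonacci number.",
)
print(response.output_text)

# A follow-up turn: chaining on the previous response carries state
# (including reasoning context) forward between turns.
follow_up = client.responses.create(
    model="gpt-5",
    previous_response_id=response.id,
    input="Now the 50th.",
)
print(follow_up.output_text)
```

Chaining via previous_response_id is what lets reasoning context carry over instead of being re-derived each turn.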
The Responses API has already overtaken Chat Completions in token activity. It’s our recommended path to integrate with the OpenAI API today, and for the future.
Is it possible to clarify the SLA and limits for conversation persistence? The language is not clear.
>> Response objects are saved for 30 days by default. They can be viewed in the dashboard logs page or retrieved via the API. You can disable this behavior by setting store to false when creating a Response.
>> Conversation objects and the items in them are not subject to the 30-day TTL. Any response attached to a conversation will have its items persisted with no 30-day TTL.
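In code, the two behaviors look roughly like this (a sketch based on the documented parameters; double-check against your SDK version):

```python
from openai import OpenAI

client = OpenAI()

# Standalone response: stored 30 days by default; opt out with store=False.
ephemeral = client.responses.create(
    model="gpt-5",
    input="This response will not be retained.",
    store=False,
)

# Conversation-attached response: its items persist with the conversation
# until you delete the conversation yourself; the 30-day TTL does not apply.
conversation = client.conversations.create()
reply = client.responses.create(
    model="gpt-5",
    conversation=conversation.id,
    input="This exchange persists with the conversation.",
)
```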
I find “persisted with no 30-day TTL” to be a very confusing statement. You always need to say which exact TTL you apply rather than which one you don’t. Can you clarify this point about persistence? I don’t want to have to interpret it…
I read it again, and there is also confusion about what this “no policy” applies to: there are conversation objects, the items in them, and responses.
This is what I received previously, the Assistants API announcement:
We’re excited to introduce the beta of our new Assistants API, designed to help you build agent-like experiences in your applications effortlessly. Use cases range from a natural language-based data analysis app and a coding assistant to an AI-powered vacation planner, a voice-controlled DJ, or a smart visual canvas; the list goes on.
This API enables the creation of purpose-built AI assistants that can follow specific instructions, leverage additional knowledge, and interact with models and tools to perform various tasks.
Assistants have persistent Threads for developers to hand off thread state management to OpenAI and work around context window constraints. They can also use new tools like Code Interpreter, Retrieval, and Function Calling.
Our platform Playground allows you to play with this new API without writing code.
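The flow I built against looks roughly like this (simplified; these are the beta endpoints being retired):

```python
from openai import OpenAI

client = OpenAI()

# Legacy Assistants flow: a configured assistant with server-side tools.
assistant = client.beta.assistants.create(
    model="gpt-4o",
    instructions="You are a data-analysis assistant.",
    tools=[{"type": "code_interpreter"}],
)

# A persistent Thread hands conversation-state management off to OpenAI.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Summarize the trend in this dataset.",
)

# Run the assistant on the thread and wait for completion.
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
```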
I’ve been building a product around it. And now I’m being told there is a new API. Is the new API a beta as well? How can we trust it will not also experience the same treatment in the near future?
The forthcoming deprecation of the Assistants API was announced here back in March; it has been pretty clear for a while that it was going away eventually:
This is the nature of IT, unfortunately. You need to plan for maintenance, including upgrades and migrations between versions of dependencies, not just feature creation and bug fixes.
The migration guide says “Their [Assistants] replacement, prompts, can only be created in the dashboard, where you can version them as you develop your product.”
My application creates Assistants dynamically. I’m not sure how this is “feature parity”. Is there an API way to create Prompts?
Creation of “Prompts” is in “Chat”, AKA the Playground. It appears much like a preset did before (presets are now non-functional), except that when you save one, you receive a prompt ID, which stores the developer instructions and some messages from the left pane along with some of the visible settings (and magical unknowns when trying to use or override parameters, or, ultimately, failure when you try to override a reasoning model in a prompt, even if there are no reasoning turns in the conversation or previous response ID).
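For reference, about the only API-side interaction available is passing a saved prompt ID into a Responses call, something like this sketch (the ID, version, and variables are placeholders):

```python
from openai import OpenAI

client = OpenAI()

# Reference a dashboard-created prompt by ID; "pmpt_abc123" is hypothetical.
# The model and settings may be taken from the prompt's saved configuration.
response = client.responses.create(
    prompt={
        "id": "pmpt_abc123",            # placeholder prompt ID
        "version": "2",                 # optional: pin a specific version
        "variables": {"city": "Oslo"},  # fills template variables, if any
    },
    input="What should I pack?",
)
print(response.output_text)
```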
It is NOT at feature parity, because you can only use the platform UI to build such a prompt. There is no API to create a “prompt” equivalent to what you could do with Assistants, nor is there even an API method to obtain the stored settings; thus you cannot even know how to run a prompt, with the reasoning or “include” options that, if furnished incorrectly, will cause an API error and failure.
It is NOT at feature parity, because Assistants had a working “truncation” method that at least let you limit a conversation by number of turns, if not by an absolute budget of what you want to spend. Given caching and the possibility of switching models, there should be an even better budget: a “pointer” to the last retained message that can be moved forward whenever the cache hasn’t already been broken by expiry on some server.
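If you manage the input list yourself, you can approximate the old truncation behavior client-side; a rough sketch:

```python
# A client-side stand-in for the old Assistants truncation strategy:
# keep system/developer messages, plus only the last N conversation turns.
def truncate_history(messages: list[dict], last_turns: int = 10) -> list[dict]:
    pinned = [m for m in messages if m["role"] in ("system", "developer")]
    rest = [m for m in messages if m["role"] not in ("system", "developer")]
    return pinned + rest[-last_turns:]
```

The cache-aware pointer described above would have to live server-side; this only caps what you resend.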
That’s okay, because it is quite easy to see these shortcomings in both Assistants and Responses with their “Prompts”, and disregard all the offerings.
Great update! I like how Responses keeps the best of Assistants while simplifying workflows. Preserved reasoning tokens and persistent conversations should make website integrations much more adaptive. Curious to see whether Responses will evolve toward even more agent-like capabilities.
Great update! The Responses API seems much more streamlined and feature-rich. The migration guide is super helpful; can’t wait to try the new tools with GPT-5.
I like the Responses API, but my favorite thing about Assistants is the single pane of glass I get in the dashboard for all the assistants created. It makes them easy to track, and also easy to show others (like the ones who sign my checks) what the assistants do, related info, etc., without showing them Python code they won’t understand. Up until now, I can’t find anything referencing whether the dashboard will be updated, replaced, or simply removed in favor of the full API.
I think that part of Assistants is now under Chat in the dashboard (and should REALLY be called “Prompts”, because that is what it is in the API). It saves everything that you used to save with an Assistant, but now with bonus versioning. And you can use a Prompt in the Responses and Agents APIs.
Appreciate the assistance and screenshot… that helped me get things sorted. For better context, I work for a law office implementing AI, so the API is paid for by them. There’s no sharing of other companies’ data. I do have my own API account but haven’t branched off on my own yet. Thanks again!
Edit: I’ve confirmed this is the exact replacement UI for what I needed. All of the UI functions I needed are there.
Thanks for the update! It’s great to see that the best parts of the Assistants API (like code interpreter and persistent conversations) are now part of the Responses API. The migration guide is super helpful, and I appreciate that multi-step workflows are much simpler with built-in tools.
Can somebody explain to me: will assistants be deprecated too? I just don’t see how I can replace the Assistants API with this new API. Many of my clients are using assistants; will those assistants be removed, or what? Thanks.
It is easy to come to a misunderstanding regarding what this topic is about. After all, OpenAI took the word “Assistant”, which was already in common use, and assigned it to an endpoint (the sister of “GPTs”), making it impossible to follow what they were talking about. You can now send the “assistant” role to Responses and save it in your “Prompt”: one example of the perpetual confusion-making in even the replacement.
Assistants – the API endpoint that has “instructions” assigned to a configuration – has never left “beta”. This forum topic refers to the entire Assistants API platform: the assistant ID that contains settings, the threads with conversations, and running threads on an assistant.
It is deprecated, and really, was already deprecated as soon as anybody heard that OpenAI was planning to shut it down. You’ve got under a year until the big off switch gets thrown.