Chat Completions API with reasoning models

Hey
I understand that leveraging the Responses API with reasoning models can unlock higher intelligence.
However, it is not trivial to migrate our app to the Responses API.

I still want to quantify the benefit of the reasoning model with our current app in the best way possible (i.e. with the Chat Completions API) before migrating to the new API.

Is there a way to get the reasoning tokens (or an id for them) and pass them back as part of the message list with the Chat Completions API, or can this only be done with the Responses API?
Assume:

  • we handle the message list ourselves
  • the task is done by an agentic workflow that uses tools one after the other with no user feedback, so as I understand it, if I were using the Responses API, the reasoning tokens would not be discarded between tool calls (a sketch of the kind of loop I mean follows below)
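For concreteness, here is a minimal sketch of our current kind of loop on the Chat Completions API. The model name, the get_ticket_status tool, the prompt, and the run_tool dispatch are just placeholders, not our real app. The point is that the only trace of reasoning we see is a token count in usage; the assistant message carries no reasoning content or id we could append back.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative tool definition; the real app has its own tools.
tools = [{
    "type": "function",
    "function": {
        "name": "get_ticket_status",
        "description": "Look up the status of a ticket (placeholder tool).",
        "parameters": {
            "type": "object",
            "properties": {"ticket_id": {"type": "string"}},
            "required": ["ticket_id"],
        },
    },
}]

def run_tool(tool_call):
    # Placeholder dispatch; the real app routes on tool_call.function.name.
    return '{"status": "open"}'

messages = [{"role": "user", "content": "Check ticket ABC-123 and summarize it."}]

while True:
    response = client.chat.completions.create(
        model="o4-mini",  # assuming a reasoning model
        messages=messages,
        tools=tools,
    )
    choice = response.choices[0]

    # Only the reasoning token count is exposed; the reasoning itself is
    # discarded and cannot be re-injected into the message list.
    print(response.usage.completion_tokens_details.reasoning_tokens)

    # Append the assistant turn (tool calls only, no reasoning items).
    messages.append(choice.message)

    if not choice.message.tool_calls:
        break

    for tool_call in choice.message.tool_calls:
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": run_tool(tool_call),
        })

print(choice.message.content)
```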

Thanks

At the moment, I believe this is not possible with the Chat Completions API.

Keeping reasoning items in context

In the Chat Completions API, the model’s reasoning is discarded after every API request. While this doesn’t impact the model’s performance in most cases, some complex agentic tasks involving multiple function calls see greater intelligence and higher token efficiency when reasoning items are retained in context. Retaining reasoning items in context is only possible with the stateful Responses API, with the store parameter set to true.
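For illustration, here is a minimal sketch of what that looks like on the Responses API, chaining each tool round with previous_response_id and store=True. The model name, the get_ticket_status tool, and the hard-coded tool output are placeholders rather than anything from your app.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative tool definition in the Responses API format (flat, not nested).
tools = [{
    "type": "function",
    "name": "get_ticket_status",
    "description": "Look up the status of a ticket (placeholder tool).",
    "parameters": {
        "type": "object",
        "properties": {"ticket_id": {"type": "string"}},
        "required": ["ticket_id"],
    },
}]

response = client.responses.create(
    model="o4-mini",  # assuming a reasoning model
    input="Check ticket ABC-123 and summarize it.",
    tools=tools,
    store=True,  # stateful run; reasoning items are kept server-side
)

while True:
    calls = [item for item in response.output if item.type == "function_call"]
    if not calls:
        break

    tool_outputs = [{
        "type": "function_call_output",
        "call_id": call.call_id,
        "output": '{"status": "open"}',  # placeholder tool result
    } for call in calls]

    # Chaining on previous_response_id keeps the earlier reasoning items in
    # context between tool calls instead of discarding them.
    response = client.responses.create(
        model="o4-mini",
        previous_response_id=response.id,
        input=tool_outputs,
        tools=tools,
        store=True,
    )

print(response.output_text)
```

The key difference from the Chat Completions loop is that you never rebuild the context yourself: chaining on the response id is what carries the reasoning items across the tool calls.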