Responses API Context Management

Hi, I’m migrating from the Chat Completions API to the Responses API, and I have a question about context management for reasoning items.

According to the reasoning models doc (Controlling costs section): “If you’re managing context manually across model turns, you can discard older reasoning items unless you’re responding to a function call, in which case you must include all reasoning items between the function call and the last user message.”
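For reference, here is how I’ve been interpreting that pruning rule. This is a hypothetical helper with simplified item shapes, not anything from the SDK — just my reading of the doc: drop reasoning items entirely unless we’re replying to a function call, in which case keep the reasoning items that come after the last user message.

```python
from typing import Any

def prune_reasoning(
    items: list[dict[str, Any]],
    responding_to_function_call: bool,
) -> list[dict[str, Any]]:
    """Drop older reasoning items, except those between the last user
    message and a pending function call (my reading of the docs)."""
    if responding_to_function_call:
        # Index of the last user message; reasoning items after it are kept.
        last_user = max(
            (i for i, it in enumerate(items)
             if it.get("type") == "message" and it.get("role") == "user"),
            default=-1,
        )
        return [
            it for i, it in enumerate(items)
            if it.get("type") != "reasoning" or i > last_user
        ]
    # Not mid function call: all reasoning items can be discarded.
    return [it for it in items if it.get("type") != "reasoning"]
```

So in my current migration code I run the whole prior turn’s output through this before building the next request’s `input` list.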

However, the cookbook on reasoning models in the Responses API says that even if we send all previous reasoning items, they won’t be shown to the model: “reasoning items from this older turn aren’t shown to the model, even if developers send them in their API requests.”

So can we just pass all the reasoning items from previous turns and rely on the model never seeing them? And if the model doesn’t see them, does that mean they aren’t billed as input tokens, i.e., the token cost is saved automatically? Thank you so much!