Reasoning Models Like OpenAI o1 in the Context of AI Agents

I would love to get feedback from the community on something I have been wondering about…

Some background: when building generative AI applications, the complexity of the application can be decomposed and handled outside of the LLM. This approach is more complex and time consuming, but it gives you granular inspectability and control.
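
To make that concrete, here is a rough sketch of the "decompose outside the LLM" approach, where the application owns the steps and every intermediate result can be logged and inspected. The model name, prompts, and helper names are just placeholders, not any specific framework:

```python
# Decomposition handled outside the LLM: the application splits the task into
# explicit steps, each a small call with its own inspectable intermediate output.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """One small LLM call; every call is a separate, loggable unit."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap in whatever you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def answer_question(question: str) -> str:
    # Step 1: the application, not the model, decides the decomposition.
    sub_tasks = ask(f"Break this question into 2-3 sub-questions:\n{question}")
    print("Sub-tasks:", sub_tasks)       # granular inspection point

    # Step 2: answer each sub-question separately.
    partials = ask(f"Answer each of these briefly:\n{sub_tasks}")
    print("Partial answers:", partials)  # another inspection point

    # Step 3: synthesise the final answer from the intermediate results.
    return ask(f"Combine these into one answer to '{question}':\n{partials}")
```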

Then there is the approach of “offloading” functionality to the LLM, where the model takes care of some of the heavy lifting… but this comes with trade-offs: tighter coupling to a particular model and vendor, and a loss of inspectability, observability, etc.
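
By contrast, the "offloaded" version of the same task collapses to a single call to a reasoning model; far less code, but the decomposition happens inside the model where you cannot inspect it, and the behaviour is tied to that specific model. Again, the model name here is only an assumption:

```python
# Offloading the heavy lifting to a reasoning model: one call, no visible steps.
from openai import OpenAI

client = OpenAI()

def answer_question_offloaded(question: str) -> str:
    response = client.chat.completions.create(
        model="o1-mini",  # assumed reasoning model; decomposition happens internally
        messages=[{"role": "user", "content": question}],
    )
    # Only the final answer comes back; there are no intermediate steps to log.
    return response.choices[0].message.content
```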

AI agents: agents have a large component of task decomposition and reasoning. With models that have advanced reasoning capabilities, like OpenAI o1, can it be argued that a level of task decomposition and reasoning can be offloaded to the model itself?
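
One hybrid I have been picturing (purely a sketch, with assumed model names, prompt format, and stub tools) is an agent that offloads the planning/decomposition to the reasoning model but keeps tool execution in application code, so at least that part stays observable:

```python
# Hybrid sketch: the reasoning model produces the plan, the application executes it.
import json
from openai import OpenAI

client = OpenAI()

# Stub tools standing in for real capabilities.
TOOLS = {
    "search_docs": lambda q: f"(stub) top documents for: {q}",
    "summarise": lambda text: f"(stub) summary of: {text[:40]}...",
}

def run_agent(goal: str) -> list[dict]:
    # Decomposition is offloaded: the reasoning model returns a tool plan.
    plan_text = client.chat.completions.create(
        model="o1-mini",  # assumed reasoning model
        messages=[{
            "role": "user",
            "content": (
                f"Goal: {goal}\n"
                f"Available tools: {list(TOOLS)}\n"
                'Return a JSON list of steps like [{"tool": ..., "input": ...}].'
            ),
        }],
    ).choices[0].message.content

    steps = json.loads(plan_text)  # assumes the model returns clean JSON
    results = []
    for step in steps:             # execution stays in application code,
        output = TOOLS[step["tool"]](step["input"])  # so it remains inspectable
        results.append({"step": step, "output": output})
    return results
```

Curious whether others see this split (model-owned planning, application-owned execution) as a reasonable middle ground, or whether the loss of visibility into the plan itself already gives up too much.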