What if we're thinking about agents wrong?

What if gpt-4o-mini were finetuned specifically to assign tasks to gpt-4o or 3.5-sonnet? Routing seems like a much more feasible high-level job for a smaller model to excel at, while the larger models are reserved for tasks that require complex reasoning, problem solving, tool use, and so on.
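A minimal sketch of what that delegation loop could look like. Everything here is hypothetical: `call_small_model` and `call_large_model` are stand-ins for real API calls (e.g. to gpt-4o-mini and gpt-4o), and the keyword check is just a placeholder for what a finetuned router would actually learn.

```python
def call_small_model(prompt: str) -> str:
    """Stand-in for a cheap router model (hypothetical, e.g. gpt-4o-mini).

    A finetuned router would return a routing label from the model;
    here we fake it with a keyword check.
    """
    simple_markers = ("summarize", "translate", "extract")
    return "simple" if any(m in prompt.lower() for m in simple_markers) else "complex"


def call_large_model(prompt: str) -> str:
    """Stand-in for an expensive model call (hypothetical, e.g. gpt-4o)."""
    return f"[large model answer to: {prompt}]"


def handle(prompt: str) -> str:
    # The small model only decides *where* the task goes;
    # it never has to solve the hard task itself.
    route = call_small_model(prompt)
    if route == "simple":
        return f"[small model answer to: {prompt}]"
    return call_large_model(prompt)


print(handle("Summarize this paragraph."))          # stays on the small model
print(handle("Debug this race condition for me."))  # escalates to the large model
```

The point of the sketch is that the router's job is classification, not generation, which is exactly the kind of narrow task small models tend to handle well.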

It’s like having a skilled project manager in a company. They may not be experts in every field, but they excel at understanding tasks and delegating them to the right specialists. This way, the company leverages everyone’s strengths efficiently.

I’d like to hear what other people think about this.