I actually did this.
multiagent_instruction = """
This thread is maintained and operated on by multiple agents. Please take great care following the
latest 'run instructions' provided by each agent.
"""
And then during runs:
messages = run_ai_thread(assistant_id, thread_id, additional_instructions=multiagent_instruction)
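`run_ai_thread` above is the poster's own wrapper, not a library function; against the real Assistants API it would roughly correspond to creating a run with `additional_instructions` and polling it. Here is a minimal local sketch of the pattern only, with no API calls at all (the `Thread` class, agent ids, and placeholder replies are invented stand-ins, not OpenAI objects):

```python
# Minimal local sketch of one shared thread operated on by multiple agents.
# These classes are illustrative stand-ins, NOT the OpenAI Assistants API.

multiagent_instruction = """
This thread is maintained and operated on by multiple agents. Please take great care following the
latest 'run instructions' provided by each agent.
"""

class Thread:
    """A shared message log that several agents append to."""
    def __init__(self):
        self.messages = []  # list of (role, agent_id, text)

    def add(self, role, agent_id, text):
        self.messages.append((role, agent_id, text))

def run_ai_thread(agent_id, thread, additional_instructions=""):
    """Sketch of one run: the extra instructions are attached to this
    run only, mirroring the API's `additional_instructions` parameter."""
    prompt = additional_instructions + "\n".join(
        text for _, _, text in thread.messages
    )
    # A real implementation would send `prompt` to the model here;
    # we just record a placeholder reply so the flow is visible.
    reply = f"[{agent_id}] reply to {len(thread.messages)} message(s)"
    thread.add("assistant", agent_id, reply)
    return thread.messages

thread = Thread()
thread.add("user", "user", "Summarise the local data set.")
messages = run_ai_thread("agent_1", thread, additional_instructions=multiagent_instruction)
messages = run_ai_thread("agent_2", thread, additional_instructions=multiagent_instruction)
```

Note that the instruction string rides along on every run, which is exactly where the per-task cost the poster mentions next comes from.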
Be ready for 100 Assistants working together, and you will see the costs.
Scale this project up and check whether it is worth the cost of sending those instructions every time the user wants another task; I think you will end up looking for a better solution.
And when you have a complex problem to solve, it will be hard even to assemble; you will probably lose information, and the context window will be a limit.
If OpenAI doesn’t block me like they did with the GPT-4 limits and tons of errors, I could share a demo with Assistants using functions, working with local data and working together to solve complex problems… Today I had 6 breaks of 1 h 50 min each (usage cap… on Llama 3 or Gemini I probably won’t have this problem anymore).
For me it’s not really about that scale, but rather about dividing the problem into, let’s say, the 5-10 range.
And the depth will probably never go past 2 (1 route agent → 5-9 specialist agents).
But I need to think about that route agent. Right now I’m lazily using GPT-4-* agents, which introduces both cost and latency into the overall response times.
My route agent isn’t all that complicated, so I should probably try a smaller open-source model. I have my 50 test cases, so I’m actually looking at making my backend pluggable to allow mixing and matching agents.
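A route agent at that depth can be prototyped without any model at all before swapping in a smaller open-source one. A toy sketch of the route-then-dispatch shape (the specialist names and keywords here are invented for illustration, not taken from the poster's setup):

```python
# Toy route agent: picks a specialist by keyword overlap.
# Specialist names and keyword sets are invented for illustration;
# a real router would be a small classifier or LLM call instead.

SPECIALISTS = {
    "sql_agent":    {"query", "table", "sql"},
    "code_agent":   {"python", "bug", "function"},
    "search_agent": {"find", "lookup", "search"},
}

def route(task: str) -> str:
    """Return the specialist whose keywords best match the task,
    falling back to a default agent on no match. Depth stays at 2:
    route agent -> specialist agent."""
    words = set(task.lower().split())
    best, best_hits = "default_agent", 0
    for agent, keywords in SPECIALISTS.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best, best_hits = agent, hits
    return best
```

Keeping the router behind a function like this is what makes the backend pluggable: the keyword match can later be replaced by a call to a smaller model without touching the dispatch code, and the 50 test cases can be run against either implementation.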
Bro frippe75, when you launch your demo, tag me or send me a PM so I can see your work. Your vision and mechanism made me curious, and I want to see the results.
Sure thing @razvan.i.savin. Just need to get to demo time.