Biggest pains with LLM agents (Assistants API, Autogen, etc)

Thank you for the detailed answer, Tim, and welcome to the forum!

Could you elaborate?

I agree, but that’s the thing: it is not yet meant for production. It is still in beta.

To my understanding, one of the main problems agents were made to solve is that an LLM generally gives better answers after delving deeper into the question. That is why chain-of-thought prompting worked so well, and agents were basically automation for things like chain of thought. We can now see the limitations of this approach, but at the time it brought a real improvement in how we communicate with the models (though it was highly inefficient in many cases).
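Just to make the "automated chain of thought" idea concrete, here is a minimal sketch of that kind of loop. This is not how the Assistants API or Autogen actually work internally; it only illustrates the idea, and the model name, step count, and prompts are placeholder assumptions of mine, using the OpenAI Python SDK's chat completions endpoint.

```python
# Sketch of "agents as automated chain-of-thought": instead of asking
# the model once, loop, ask it to reason step by step, and feed its
# own reasoning back in until it declares it is done.
# Assumes the OpenAI Python SDK with an API key in OPENAI_API_KEY;
# the model name and max_steps are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

def agent_answer(question: str, max_steps: int = 3) -> str:
    messages = [
        {"role": "system",
         "content": "Reason step by step. Reply 'DONE: <answer>' when confident."},
        {"role": "user", "content": question},
    ]
    reply = ""
    for _ in range(max_steps):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=messages,
        ).choices[0].message.content
        if reply.strip().startswith("DONE:"):
            return reply.split("DONE:", 1)[1].strip()
        # Feed the partial reasoning back and ask the model to keep going,
        # i.e. the "delving deeper" described above, just automated.
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": "Continue reasoning."})
    return reply

if __name__ == "__main__":
    print(agent_answer("What is 17 * 24, worked out step by step?"))
```

The inefficiency mentioned above is visible even in this toy version: every extra reasoning turn is another full round trip and another pass over the growing message history.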

We are steering away a bit from the topic, but on your point about no-code/low-code: I personally don’t believe in such solutions early on in any field. They may get built, but they won’t become as popular and widespread as, say, no-code website builders are now, simply because there is not yet enough expertise on the market to move to that stage.