Biggest pains with LLM agents (Assistants API, Autogen, etc)

@TonyAIChamp

Speaking from a no-code/low-code background:

  • Every time I see the terms low/no code I cringe, because what they mean to a dev and what they mean to an actual low/no-code person are very different things.

  • Assistants in the Playground are easy enough to use. Putting one into production on a website is the hard part. Charging $40/month for 5 bots is taking the mickey.

  • LLMs don’t seem to do any real reasoning; they just remix training data. Most of the code-related material prior to the training cut-off dates was written by devs, so when a low-code/no-code person decides to play, it’s often painful: the training data includes little to nothing from low/no-code types on how to get stuff done.

  • Langchain… painful, mainly because the last time I used it was Oct/Nov, and back then Bard/GPT-4 hadn’t been trained on it, so they didn’t offer much in the way of working solutions. My workaround was, rather than figure out what a blob is, to just copy/paste the underlying libraries into a Python script until things eventually worked. This was for basic stuff: the course shows how to transcribe a YouTube video, so hey, let me transcribe a local Teams recording. At one point uploading the recording to YouTube and transcribing it from there was the less painful option, but I think I eventually figured out how to handle local files as well (rough sketch at the end of this post).

  • You mention visual interfaces. I first used a webpage builder circa 2005, and I quickly learnt that Microsoft’s UI/UX is good enough that most people can figure out a lot of Word/Excel/PowerPoint by just clicking, seeing what happens and learning by doing. Many UX/UI experiences today are more like the webpage builders of old.

  • Even “no code” sources like the Marketplace in Google Cloud can be problematic. Take Stable Diffusion. The Marketplace had 2 main “repos”. One was for Automatic1111 and I forget the name of the other. Deployment is easy enough… click… click… click… hold on… there aren’t any low-end graphics cards available, and the one-click install is written for only one card type. Eventually I worked out that if I downloaded the “repo” I could probably edit it to accept a list of cards rather than just the one type (T4s I think); see the second sketch at the end of this post.
    The other repo deploys via a VM, and the environment was slow and clunky enough that I moved on to something else. In the end it was easier to run SD locally and wait 10-30 mins for my images, lol.
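
For anyone stuck on the local-recording thing: this is roughly what I mean by skipping the framework and calling the underlying library directly. A minimal sketch, assuming the openai-whisper package and ffmpeg are installed; the file name is made up.

```python
# Minimal sketch: transcribe a local Teams recording directly with
# openai-whisper instead of going through a LangChain YouTube loader.
# Assumes `pip install openai-whisper` and ffmpeg on the PATH.
import whisper

model = whisper.load_model("base")                # small model, runs fine on CPU
result = model.transcribe("teams_recording.mp4")  # hypothetical local file name
print(result["text"])
```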
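And on the GPU point, this is the kind of check I wish the one-click installer had done for me. A rough sketch using the google-cloud-compute Python client, not the actual Marketplace template; the project ID and the list of accepted card types are placeholders I made up.

```python
# Rough sketch (assumption, not the Marketplace template): list which zones
# actually offer an accelerator type you'd accept, instead of hard-coding T4s.
# Assumes `pip install google-cloud-compute` and application-default credentials.
from google.cloud import compute_v1

ACCEPTED = {"nvidia-tesla-t4", "nvidia-l4"}  # hypothetical list of card types

client = compute_v1.AcceleratorTypesClient()

# aggregated_list yields (zone, scoped_list) pairs across every zone in the project
for zone, scoped in client.aggregated_list(project="my-project"):
    for acc in scoped.accelerator_types:
        if acc.name in ACCEPTED:
            print(zone, acc.name)
```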
