What do you think about langchain?

There were posts about the LangChain library in this forum earlier, but it hasn't been mentioned as much recently.

Are you using it? Is it good? Is it still relevant? Has another standard surpassed it?

I used to use it a lot, but stopped. I don't see much value in it compared to doing the same thing myself.

But it is definitely great for newcomers as a quick start for using LLMs.


I only use it when I develop with local LLMs.

Thanks for responding. Why only local?

openai-python is already sufficient for development if my application is calling OpenAI’s endpoint.
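To illustrate how little a framework adds here: the chat completions endpoint takes a small JSON body, sketched below in plain Python (model name and prompts are placeholders).

```python
import json

# Build the documented request body for OpenAI's chat completions endpoint.
def chat_payload(model: str, system: str, user: str) -> dict:
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }

payload = chat_payload("gpt-4o-mini", "You are a helpful assistant.", "Hello!")
print(json.dumps(payload, indent=2))

# With openai-python (v1) sending this is roughly:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   resp = client.chat.completions.create(**payload)
#   print(resp.choices[0].message.content)
```

That is the whole surface area for a simple app, which is why a wrapper library can feel like overhead.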


If you were using other sources as well (not just OpenAI), would you consider using LangChain, or do you think LangChain has nothing to offer?

I am someone who is using multiple LLMs from multiple providers on a platform I wrote in PHP. I began my AI journey at the beginning of this year, but I came with some 20+ years of PHP programming experience. So I thought, "Why do I have to learn another programming language in order to do development with LLMs, all of which have documented APIs?" I stuck with PHP, and it was the best decision ever.

That said, if you don't have an extensive programming background and want to get into this game, I don't see a better way than LangChain, with its huge Python library.


For me the biggest problem (again, not rejecting all its benefits) was that it's a high-level abstraction which in many cases you don't have control over. LangChain often injects its own system prompts, which makes the stochastic parrot even more stochastic: it adds another layer of uncertainty (and a lot of time wasted figuring out why your prompts don't work as expected).


Without specific dates, it’s challenging to visualize the timeline you have in mind.

A few months ago, most notably at OpenAI DevDay (Nov 6, 2023), OpenAI added new functionality both to the API (such as Assistants) and to ChatGPT (such as custom GPTs).

As a result, the discussion topics seem to have shifted towards users experimenting with these new products in comparison to similar offerings like LangChain.

Ya. It’s hard to discuss. I suppose even ChatGPT can’t hold a good discussion with just
“What do you think about langchain?”

Here’s a recent discussion (one of many) responding to a question about using LangChain in production, in the r/LocalLLaMA forum: Reddit - Dive into anything

Since you asked about possible alternatives, I’ll mention Langroid (a couple of commenters in the above thread mention switching to Langroid from LangChain).

We have a few companies using Langroid in production (contact center management, document-vs-spec matching/scoring) after evaluating others including LC and AutoGen. Langroid is a multi-agent LLM framework from ex-CMU and UW Madison researchers: GitHub - langroid/langroid: Harness LLMs with Multi-Agent Programming.
[To be clear, it does not use LangChain].

We expressly designed this framework to simplify building applications, using an agent-oriented approach from the start. You can define agents with optional tools and vector-db, assign them tasks, and have them collaborate via messages: this is a “conversational programming” paradigm. It works with local/open and remote/proprietary LLMs.

A multi-agent approach often leads to simpler solutions than stuffing too much into a single agent. Langroid has a built-in task orchestrator that seamlessly handles LLM tool/function calls (with retries when the LLM deviates), as well as sub-task handoff.
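The message-passing paradigm described above can be sketched in plain Python. This is only a toy illustration of the idea, not Langroid's actual API: each "agent" is just a named responder, and an orchestrator routes messages between them until one signals it is done.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    respond: Callable[[str], str]  # takes an incoming message, returns a reply

def run_task(agents: list[Agent], message: str, max_turns: int = 10) -> str:
    """Round-robin orchestrator: pass the message from agent to agent
    until one replies with a DONE marker (or turns run out)."""
    for turn in range(max_turns):
        agent = agents[turn % len(agents)]
        message = agent.respond(message)
        if message.startswith("DONE"):
            return message.removeprefix("DONE").strip()
    return message

# Example: an "extractor" proposes a value, a "validator" accepts it.
extractor = Agent("extractor", lambda m: f"CANDIDATE 42 (from: {m})")
validator = Agent(
    "validator",
    lambda m: "DONE 42" if m.startswith("CANDIDATE 42") else "try again",
)
result = run_task([extractor, validator], "extract the answer")
print(result)  # -> 42
```

In a real framework the responder would be an LLM call with tools attached; the orchestration skeleton is the part frameworks like this provide for you.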

We’ve avoided bloat and excessive abstractions to keep the framework light, stable, and easy to tweak — e.g. all of the RAG capabilities are in a single DocChatAgent class, and code written 5 months ago still works. Granted, our documentation is lagging behind our features, but there are plenty of tests and examples clarifying usage.

I’ve been personally super-productive building complex multi-agent workflows using this framework for a couple of clients. The demands of these client projects drive feature development in Langroid.

Here’s a Colab quick-start that builds up to a 2-agent system to extract structured info from a document:

I agree with most of the comments here: LangChain is good when you’re getting started. The docs aren’t great, but there are enough breadcrumbs if you’re willing to get into the source code. Prompts are key, and you can pull LangChain’s built-in prompts as “starter templates”. The model abstraction was appealing to me at the beginning, but you need to adjust prompts for different models, so that becomes a little tedious.

The vector-database and memory abstractions are useful, but they take some LangChain-specific understanding of memory management to make sure the memory follows you.

For example, I was using local memory, then easily swapped it out for an open-source solution, zep (getzep.com), for persistent memory. Swapping vector databases and other tools is also pretty easy.
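The reason that swap is easy is that the backends share one interface. Here is a minimal sketch of the pattern in plain Python: InMemoryHistory works as written, while the persistent backend (e.g. a zep client) would be another class with the same two methods — the names here are illustrative, not zep's or LangChain's real API.

```python
from typing import Protocol

class ChatHistory(Protocol):
    """The interface both memory backends must satisfy."""
    def add(self, role: str, content: str) -> None: ...
    def messages(self) -> list[dict]: ...

class InMemoryHistory:
    """Local, non-persistent memory: just a list in RAM."""
    def __init__(self) -> None:
        self._msgs: list[dict] = []
    def add(self, role: str, content: str) -> None:
        self._msgs.append({"role": role, "content": content})
    def messages(self) -> list[dict]:
        return list(self._msgs)

def chat_turn(history: ChatHistory, user_text: str) -> list[dict]:
    """Record the user turn and return the full context to send to the LLM."""
    history.add("user", user_text)
    return history.messages()

memory = InMemoryHistory()  # later: swap for a persistent backend, same calls
context = chat_turn(memory, "hi")
print(context)
```

Because `chat_turn` only depends on the protocol, switching from local to persistent memory changes one constructor call, not the application code.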

If you’re new to AI development, trawling the docs and seeing what tools are out there was really helpful.


I found it much better not to use LangChain for my projects. Without LangChain, my model runs much faster, I have access to everything, and I know exactly how everything works behind the scenes. I think it’s great for starting to learn, but it’s not the most convenient for turning things into a product.