Biggest difficulty in developing LLM apps

This looks like a downgraded pre-version of RAG Fusion by Adrian R.

Sure, I can give you a video of it working:

Also several sites by computer science Ph.D.s describing it:

And a post on LangChain / LlamaIndex.
If you follow these steps, you will get good results.

To the people asking whether this will be useful: yes, and it is not something a single LLM can do without an IDE, which is an excellent point. The context windows of LLMs are severely limited because self-attention scales quadratically with input length through the transformer layers. We can work around this by having multiple AI agents each handle a separate section of the query, as demonstrated in ChatDev toscl and AI Jason's Agent 3.0 (with build structures, IDE separation of agents, and creation of in-computer warehouses), and then by adding another type of LLM: a state-space model (SSM) such as Mamba, which can handle very long sequence inputs, in combination with GPT-4 agents.
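To make the scaling point concrete, here is a rough back-of-the-envelope sketch (my own illustration, not from the post): since self-attention cost grows with the square of the sequence length, splitting one long context across several agents shrinks the total attention work.

```python
# Illustrative arithmetic only: a crude proxy for self-attention cost,
# which is O(n^2) in the number of input tokens.

def attention_cost(tokens: int) -> int:
    """Rough proxy for self-attention work over a sequence of `tokens`."""
    return tokens * tokens

full_context = 32_000          # one agent reads the whole input
chunks = [8_000] * 4           # four agents each read a quarter of it

single_agent = attention_cost(full_context)
multi_agent = sum(attention_cost(c) for c in chunks)

print(single_agent // multi_agent)  # → 4: splitting 4 ways cuts the quadratic term 4x
```

Of course this ignores the cost of coordinating the agents and merging their outputs, which is exactly what the oversight LLMs below are for.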
By breaking all tasks down into smaller blocks, we create a development environment where multiple LLMs can handle the tasks of RAG, leading to the ability to create fusion-type inferences.

  1. Each LLM creates its own RAG request.
  2. The requests are then ranked by an oversight LLM.
  3. The best of the RAG results are then combined by another LLM.
  4. The combined result is presented to another LLM that is handling part of the input values.
  5. The oversight process is monitored by a further LLM, which assigns tasks.
    Here are links to programs using these features:
    https://www.youtube.com/watch?v=Zlgkzjndpak&t=230s ChatDev toscl, with a video graphic interface so you can see the LLMs talking to each other.
    https://www.youtube.com/watch?v=AVInhYBUnKs&t=1s An LLM API that creates groups of research agents with warehouses, to prevent losing track of what is going on.
    In combination, these technologies and some software programming make handling larger tasks very simple.
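The five steps above can be sketched as plain Python. Everything here is a stand-in sketch: `run_rag_fusion`, the role names, and the `echo` stub are my own hypothetical names, and you would swap a real LLM client (OpenAI, LangChain, etc.) into each role.

```python
# A minimal sketch of the 5-step pipeline. Each role (worker, ranker,
# combiner, consumer, overseer) is just a function from prompt to text;
# in practice each would be a separate LLM call.

from typing import Callable, List

LLM = Callable[[str], str]  # any function mapping a prompt to a completion

def run_rag_fusion(task: str, workers: List[LLM], ranker: LLM,
                   combiner: LLM, consumer: LLM, overseer: LLM) -> str:
    # Step 5: an oversight LLM assigns a sub-task to each worker.
    sub_tasks = [overseer(f"Split this task, part {i}: {task}")
                 for i in range(len(workers))]

    # Step 1: each worker LLM writes its own RAG request.
    requests = [w(f"Write a retrieval query for: {t}")
                for w, t in zip(workers, sub_tasks)]

    # (Actual retrieval would run here; stubbed as echoing the request.)
    results = [f"retrieved docs for: {r}" for r in requests]

    # Step 2: an oversight LLM ranks the results. Sorting by the length of
    # the ranker's reply is only a placeholder ordering; a real ranker
    # would return numeric scores to sort by.
    ranked = sorted(results, key=lambda r: len(ranker(f"Score 0-9: {r}")))

    # Step 3: the best results are fused into one context.
    fused = combiner("Combine these results:\n" + "\n".join(ranked[:3]))

    # Step 4: the fused context goes to the LLM handling the input values.
    return consumer(f"Answer '{task}' using:\n{fused}")

# Usage with a trivial echo stub standing in for every model:
echo: LLM = lambda prompt: prompt[:40]
print(run_rag_fusion("summarise our docs", [echo, echo], echo, echo, echo, echo))
```

The point of the structure is that no single model ever sees the whole context: each worker only sees its sub-task, and only the fused top results reach the final consumer.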