Biggest difficulty in developing LLM apps

Good point. I’m keeping my eye on smaller LLMs built for function calling for this exact purpose. To me it seems inevitable that we will need to route each query to one of a number of different agents.

My RAG as of now is very small and basically covers application design, purpose, and functionality, so I just use function calling with enums to accomplish this :man_shrugging:
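Roughly what that looks like, as a minimal sketch using the openai Python SDK (the `route_query` tool name and the intent labels are placeholders for your own categories):

```python
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical router tool: the enum constrains the model to a fixed
# set of intents, and rewritten_query doubles as query transformation.
ROUTER_TOOL = {
    "type": "function",
    "function": {
        "name": "route_query",
        "description": "Classify the user's intent and restate the query for retrieval.",
        "parameters": {
            "type": "object",
            "properties": {
                "intent": {
                    "type": "string",
                    "enum": ["application_design", "purpose", "functionality", "other"],
                },
                "rewritten_query": {
                    "type": "string",
                    "description": "The query rephrased for the downstream retriever.",
                },
            },
            "required": ["intent", "rewritten_query"],
        },
    },
}

def classify(user_query: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": user_query}],
        tools=[ROUTER_TOOL],
        # Force the model to call the router instead of answering directly.
        tool_choice={"type": "function", "function": {"name": "route_query"}},
    )
    call = response.choices[0].message.tool_calls[0]
    return json.loads(call.function.arguments)
```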

To me it makes sense to train a much smaller model on user intent classification, and then pass ambiguous or difficult queries to GPT-4 to infer the intent.
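Sketching that fallback, where the small model handles the easy cases and GPT-4 only sees the ambiguous ones (`small_classifier` here is a stand-in for whatever fine-tuned model you'd use, and the threshold is something you'd have to calibrate):

```python
from typing import Tuple

CONFIDENCE_THRESHOLD = 0.8  # assumption: tune on held-out queries

def small_classifier(query: str) -> Tuple[str, float]:
    """Stand-in for a cheap fine-tuned intent model.

    Returns an (intent, confidence) pair.
    """
    raise NotImplementedError

def route(user_query: str) -> str:
    intent, confidence = small_classifier(user_query)
    if confidence >= CONFIDENCE_THRESHOLD:
        return intent  # the small model is sure enough
    # Ambiguous/difficult query: escalate to GPT-4, e.g. via the
    # function-calling router sketched above.
    return classify(user_query)["intent"]
```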

Separating logic from GPT seems to be the key here. GPT, and LLMs by extension, are unstructured query handlers. Function calling brings the best of both worlds by classifying the intent and also transforming the query.
