Hi @tybantarnusa
The issue I have with some of the replies so far is that, from a systems engineering perspective, they describe a “single-component” software architecture where the LLM is the “one size fits all” component, and you must spend a lot of time and energy tweaking a component designed to perform task “A” so that it also performs task “B”.
In other words, it is quicker, cheaper, and much easier to add a software module / process “in front” of your LLM which pre-processes all of these “Hi” (“Type B”) cases.
Before I describe this in more detail, let me give you a real-world example most of us experience regularly:
When you call or contact most large tech companies as a customer, you normally connect with “front line support”. These folks are not really “experts”; they are hired (at lower cost, on lower salaries) to answer all the “simple” questions. Many use a script, a playbook, a manual, or even a chatbot to help them talk with customers as “front line” support.
When “front line support” cannot answer the question (or the customer screams that they want to talk to someone who actually knows what they are talking about!), the customer is referred to the next line of human support. The “next line of support” is generally higher paid and has more domain knowledge. There is often an additional “third tier of support” where customers meet specialists who are true domain experts. These people are generally “more expensive”, so their interactions with customers are limited.
The same is true for building a software application.
If you @tybantarnusa (or anyone) want to respond to “Hi”, “Hello”, any other short phrase, or common questions, it is cheaper and faster to have a software module between the customer and the LLM which does simple keyword matches and returns the reply. The LLM (which costs you money on a per-token basis) never sees these prompts, and you are never charged for these “canned replies” (a cost saving, even if only a few tokens; prompt engineering and fine-tuning are far more expensive).
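For illustration, here is a minimal sketch of such a keyword gate in Python. The reply table and the `call_llm()` stub are hypothetical placeholders, not any specific product or API:

```python
# A minimal sketch of a keyword gate placed in front of the LLM.
# CANNED_REPLIES and call_llm() are hypothetical placeholders.

CANNED_REPLIES = {
    "hi": "Hello! How can I help you today?",
    "hello": "Hello! How can I help you today?",
    "thanks": "You're welcome! Anything else I can do for you?",
}

def call_llm(prompt: str) -> str:
    # Placeholder for your actual (per-token billed) LLM call.
    raise NotImplementedError

def handle_message(text: str) -> str:
    key = text.strip().lower().rstrip("!?. ")
    if key in CANNED_REPLIES:
        # Matched: the LLM never sees this prompt, so it costs nothing.
        return CANNED_REPLIES[key]
    # No match: fall through to the paid LLM.
    return call_llm(text)
```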
More importantly, you do not need to waste time and energy trying to turn a “round peg” into a “square peg”. LLMs are not designed to perform as simple lookup tables. When someone advises you to fine-tune or manipulate an LLM into becoming a lookup table, you are being advised to turn that “round peg” into a “square peg”. Just because LLMs are “cool” or “trendy” does not mean a software engineer should depend on a “one component fits all” architecture. Designing a system from multiple components is the “heart” of software development.
So, in the scenario you are describing @tybantarnusa, you should just create a DB of these “square peg” user queries and check for a keyword match before sending the prompt to the LLM process. If there is a match, you reply with the canned reply; if not, you send the query to the LLM. Or you might, by now, realize you can run the query against a DB of more complex text and embedding vectors and return that match before calling the LLM (further reducing costs); a sketch of that follows below.
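Here is one way that embedding lookup might look, assuming the open-source sentence-transformers library; the model name, FAQ entries, and similarity threshold are all illustrative assumptions, not recommendations:

```python
# A sketch of an embedding lookup in front of the LLM, using the
# sentence-transformers library. Model name, FAQ content, and the
# 0.85 threshold are illustrative choices only.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

FAQ = [
    ("What are your opening hours?", "We are open 9am-5pm, Monday to Friday."),
    ("How do I reset my password?", "Use the 'Forgot password' link on the login page."),
]
# Pre-compute normalized vectors once, so matching is just a dot product.
faq_vectors = model.encode([q for q, _ in FAQ], normalize_embeddings=True)

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for the paid LLM call

def route(query: str, threshold: float = 0.85) -> str:
    vec = model.encode([query], normalize_embeddings=True)[0]
    scores = faq_vectors @ vec  # cosine similarity, since vectors are unit-length
    best = int(np.argmax(scores))
    if scores[best] >= threshold:
        return FAQ[best][1]  # canned reply; the LLM is never called
    return call_llm(query)   # no confident match: send to the LLM
```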
Furthermore, you can use the same components to pre-filter based on company policy. For example, if certain keywords are offensive and against company policy, you can filter them out long before the text is sent to the LLM (saving money again); a sketch follows.
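A policy pre-filter can sit in the same place. A trivial sketch, where the blocked-term list is obviously a stand-in for whatever your real policy specifies:

```python
from typing import Optional

# A sketch of a policy pre-filter. BLOCKED_TERMS is a stand-in
# for whatever your company policy actually specifies.
BLOCKED_TERMS = {"offensive_word_1", "offensive_word_2"}

def policy_check(text: str) -> Optional[str]:
    # Return a rejection message, or None if the text may go to the LLM.
    words = set(text.lower().split())
    if words & BLOCKED_TERMS:
        # Rejected before the LLM is ever called: zero token cost,
        # and the policy logic stays auditable in plain code.
        return "Sorry, your message goes against our usage policy."
    return None
```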
In summary, let me reiterate that it is a common mistake for people who are not experienced systems programmers or engineers to get caught up in a “one size should fit all” system architecture. In this case, you are being advised to take a software component designed to be language-rich and sophisticated in its responses (an LLM) and turn it into a kind of “idiot component” to manage something which does not require an LLM.
Stated another way @tybantarnusa, most software developers could have drafted a lookup table of common phrases and responses and built that software component in a few hours: less time than it takes to ask the question here in our community, and far less time than it takes to engineer prompts or fine-tune a model for such a simple “basic lookup table” response process.
Having a pre-LLM process like this has many benefits. One of them is that you can sit down with your clients and focus on “what should be preprocessed” versus “what should be sent to the LLM”. This adds another full dimension to the art of designing and building software.
Hope this helps.
If you have further questions about the many benefits of a multi-component architecture, or about developing software while wearing the hat of a systems engineer, feel free to ask.
