How to prevent the bot from interacting back with the user?

I’m building a bot that should only answer questions. But every time a user says “hi”, the bot responds with something like “Hi Thoyib! How are you doing today?”. I want it to respond with just “Hi!”.

Also, the user chat is formatted as “username: chat”. For example, when I say hi, I send “Thoyib: hi”, and the bot answers with “Bot: Hi Thoyib! How are you doing today?”. Can I prevent the bot from saying the username unless the user asks it to?

And sometimes the bot keeps greeting the user before it gives the answer. I want it to just give the answer without interacting back with the user.
For example: “Thoyib: what is a planet?”, “Bot: Hi, Thoyib. A planet is a celestial body that orbits a star, is spherical in shape due to its own gravity, and has cleared its orbit of other debris.”
I want the bot to omit the “Hi, Thoyib.”

I’ve tried:

  • You cannot ask anything to chat.
  • You will not ask back to user.
  • DON’T say their username in your response.

Nothing seems to work. Thank you in advance.

2 Likes

Negative instructions are notoriously bad in prompt design right now.

What are your settings? What model are you using?

If you just want it to say “Hi” 100% of the time no matter what, I would just hard code that response or start with “Hi…”

1 Like

Here are my current settings:
model="gpt-3.5-turbo"
temperature=0.9
top_p=0.8

Besides that, everything is default. And yes, I’ve noticed I cannot get the response I want when I use negative instructions.

1 Like

So, flip it around… in system, do something like…

You will initially greet the user with “Hi!” by itself, then respond to their questions…

I would honestly just start the convo with the bot saying “Hi!” (hard-coded by you, since it’s the first message…). Then let the person reply to the “Hi!” message from the bot. Does that make sense?
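Here’s one way to wire that up, as a minimal sketch using the pre-1.0 `openai` Python SDK (the system text and helper name are just illustrative; it assumes `OPENAI_API_KEY` is set in your environment):

```python
import openai  # pre-1.0 SDK; reads OPENAI_API_KEY from the environment

# Hard-code the greeting as the first assistant message so the model
# never has to "decide" how to say hello.
messages = [
    {
        "role": "system",
        "content": (
            "You are a question-answering bot. Messages arrive as "
            "'username: text'. Answer directly, without greetings, "
            "without questions, and without the username unless asked."
        ),
    },
    {"role": "assistant", "content": "Hi!"},  # canned opener, never generated
]

def ask(user_line: str) -> str:
    """Append the user's 'username: text' line and return the bot's reply."""
    messages.append({"role": "user", "content": user_line})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
    reply = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask("Thoyib: what is a planet?"))
```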

That could be another problem. I would only set one or the other (usually temp)… There are some edge cases where using both works well, but generally, you want to go one way or the other. Try setting top_p back to 1, then play with the temperature in 0.05 increments to see if you can get it to do what you want.
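To run that sweep, a quick test loop could look like this (same SDK assumption as above; the test prompt is arbitrary):

```python
import openai

# Reset top_p to its default of 1 and sweep temperature in 0.05 steps
# to find the lowest value that still answers naturally.
test_messages = [{"role": "user", "content": "Thoyib: hi"}]

for step in range(21):  # 0.00, 0.05, ..., 1.00
    temp = round(step * 0.05, 2)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=test_messages,
        temperature=temp,
        top_p=1,  # back to default; tune one sampler at a time
    )
    print(temp, "->", response["choices"][0]["message"]["content"])
```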

Good luck! Please come back after you’ve run some tests and let us know.

3 Likes

Before, I only set the temperature; after I lowered top_p, the answers somehow got better. Maybe I will try setting the temperature back to default.

Thank you for the suggestion. I will try to flip the negative instruction.

2 Likes

Update:

Changing “You cannot ask anything to user.” to “Respond without asking questions.” actually makes the response better. The bot now stops saying “How are you doing today?” and says “Welcome.” instead, which is okay for me.

I will keep this in mind from now on. Thank you. :smiley:

2 Likes

You can use the system and assistant roles to steer your model using advanced prompt design.

Here is an example of how to achieve that:
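A rough sketch with the pre-1.0 `openai` Python SDK; the few-shot wording is illustrative, and the seeded assistant turns show the model the desired style instead of telling it what not to do:

```python
import openai

messages = [
    {
        "role": "system",
        "content": (
            "Answer the text after the colon with facts only: "
            "no greetings, no usernames, no questions back."
        ),
    },
    # Example turns demonstrating the target behavior.
    {"role": "user", "content": "Thoyib: hi"},
    {"role": "assistant", "content": "Hi!"},
    {"role": "user", "content": "Thoyib: what is a star?"},
    {"role": "assistant", "content": (
        "A star is a massive ball of plasma held together "
        "by its own gravity."
    )},
    # The real question comes last.
    {"role": "user", "content": "Thoyib: what is a planet?"},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=messages,
)
print(response["choices"][0]["message"]["content"])
```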

1 Like

Woah, thank you for the resource. It gives me some insight into how to improve my prompt. It can also save some tokens, since I only need to include relevant information instead of dumping all the info into the prompt.

2 Likes

Hi @tybantarnusa

The issue I have with some of the replies so far is that, from a systems engineering perspective, they assume a “single-component” software architecture where the LLM is the “one size fits all” component, so you must spend a lot of time and energy tweaking a component designed to perform task “A” into performing task “B”.

In other words, it is quicker, cheaper, and much easier to add a software module / process “in front” of your LLM which pre-processes all these types of “Hi” (“Type B”) cases.

Before I describe this in more detail, let me give you a real-world example most of us experience regularly:

When you call or contact most large tech companies as a customer, you normally connect with “front line support”. These folks are not really “experts”; they are hired (at lower cost, with lower salaries) to answer all the “simple” questions. Many use a script, a playbook, a manual, or even a chatbot to help them talk with customers.

When the “front line support” cannot answer the question (or the customer screams that they want to talk to someone who actually knows what they are talking about!), the customer is referred to the next line of human support. Staff at the “next line of support” are generally higher paid and have more domain knowledge. There is often an additional “third tier of support” where customers meet specialists who are domain experts, etc. These people are generally “more expensive”, and so their interactions with customers are limited.

The same is true for building a software application.

If you @tybantarnusa (or anyone) want to respond to “Hi” or “Hello” or any other short phrase or common question, it is cheaper, faster, etc., to have a software module between the customer and the LLM which does simple keyword matches and returns the reply. The LLM (which costs you money on a per-token basis) never sees these prompts, and you are not charged either, because you don’t send these “canned replies” to the per-usage LLM (cost savings are good, even if only a few tokens; prompt engineering and fine-tuning are more expensive).

More importantly, you do not need to waste time and energy trying to turn a “round peg” into a “square peg”. LLMs are not designed to perform as simple look-up tables. When you are advised to fine-tune or manipulate an LLM into becoming a lookup table, you are being advised to turn a “round peg” into a “square peg”. Just because an LLM is “cool” or “trendy” does not mean a software engineer should depend on a “one component fits all” software architecture. Designing a system architecture using multiple components is the “heart” of software development.

So, in the scenario you are describing, @tybantarnusa, you should just create a DB of these “square peg” types of user queries and check for a keyword match before sending the prompt to the LLM process. If there is a match, you reply with the canned reply; if not, you send the query to the LLM. Or you might, by now, realize you can run the query against a DB of more complex text and embedding vectors and return that match before sending the query to the LLM (further reducing costs).
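As a sketch (the table contents and function name here are illustrative), the pre-LLM layer can be as simple as:

```python
import openai

# "Front line support": canned replies for trivial inputs, so the
# per-token LLM never sees them.
CANNED_REPLIES = {
    "hi": "Hi!",
    "hello": "Hi!",
    "thanks": "You're welcome.",
}

def answer(user_line: str) -> str:
    # Input arrives as "username: text"; match on the text part only.
    _, _, text = user_line.partition(": ")
    canned = CANNED_REPLIES.get(text.strip().lower())
    if canned is not None:
        return canned  # zero tokens spent
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_line}],
    )
    return response["choices"][0]["message"]["content"]

print(answer("Thoyib: hi"))                 # canned, no API call
print(answer("Thoyib: what is a planet?"))  # forwarded to the LLM
```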

Furthermore, you can also use the above components to pre-filter based on company policies. For example, if you have some keywords which are offensive and against company policy, you can filter them long before the text is sent to the LLM (saving money again).
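The same layer can host that policy filter; again just a sketch, with a placeholder word list:

```python
# Placeholder list; fill with whatever your company policy requires.
BLOCKED_WORDS = {"somebannedword", "anotherbannedword"}

def allowed(user_line: str) -> bool:
    """Return False if the message contains a blocked keyword."""
    return not (set(user_line.lower().split()) & BLOCKED_WORDS)

# Only forward user_line to the LLM when allowed(user_line) is True.
```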

In summary, let me reiterate that it is a common mistake for people who are not experienced systems programmers or engineers to get caught up in a “one size should fit all” systems architecture. In this case, you are being advised to take a software component designed to be language-rich and sophisticated in its responses (an LLM) and turn it into a kind of “idiot component” to manage something which does not require an LLM.

Stated another way, @tybantarnusa, most software developers could have created a draft lookup table of common phrases and responses and built that software component in a few hours: less time than it takes to ask the question here in our community, and much less time than trying to engineer prompts or even fine-tune for such a simple “basic lookup table” response process.

Having a pre-LLM process like this has many benefits. One is that you can have meetings with your clients to focus on “what should be preprocessed” and “what should be sent to the LLM”. This adds another full dimension to the art of building and designing software.

Hope this helps.

If you have further questions on the many benefits of designing a multi-component architecture, or on how to develop software wearing the hat of a systems engineer, feel free to ask.

:slight_smile:

I’m glad to help.
I’m also working on other resources for optimizing prompts, which I’ll share in this topic soon.

2 Likes