Determining what a user is asking about from a numbered list

I have topics in my database that users can ask about. In my initial prompt I provide them as a numbered list in id: 1, name: exampleName format, along with the user's question, and I ask the model to determine which topic the user is most likely asking about. However, I have run into issues when the topic names themselves contain numbers.

For example, let’s say the list contains an item with id: 1 and name: “Process-2”. When I ask the model to tell me about Process 2, it returns the topic with the id of 2 instead of the one named “Process-2”.
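Here is a simplified sketch of how I build the prompt (the topic names here are made up, but the structure matches what I'm doing):

```python
# Simplified version of my setup -- topic names are hypothetical.
topics = [
    {"id": 1, "name": "Process-2"},
    {"id": 2, "name": "Billing"},
]

# The numbered list the model sees. Note how "id: 2" and the "2" in
# "Process-2" can collide when the user's question mentions a number.
topic_list = "\n".join(f"id: {t['id']}, name: {t['name']}" for t in topics)

system_prompt = (
    "Determine which of the following topics the user is asking about "
    "and return its id.\n"
    f"Topics:\n{topic_list}"
)

user_message = '"""Tell me about Process 2"""'  # model answers 2, not 1
```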

I am using the functions API and delimiting the user question with triple quotes. I have tried engineering the prompt every which way, but I cannot get it to behave consistently. I'm sure this is a trivial use case and I'm just missing something obvious, but any help would be appreciated!

Hi, welcome to the forums!

Do you wanna share your prompt?

One strategy is to minimize confusion: instead of a numbered list, use random or pseudorandom codes that you can simply remap to your real IDs afterwards (see the sketch after the prompt below).

Of course there are a billion ways to do this, but it’s certainly doable!

Proompt

You are an operator bot at a call center. Your job is to identify what the client would like to talk about, and respond with the topic code so that a downstream system can process the request.

Here are the topics:

88d73: Sigma9
a51f1: Process-3
2124: Process-2
1321d: OmegaStar
098s8: UTF-6

If none of these topics are relevant, return the default code:

97731: default.

Write only the topic code as a response, otherwise the downstream system cannot parse your output.
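And here is a rough Python sketch of the remapping idea (the helper names are mine, nothing official):

```python
import secrets

# Real topics keyed by database id; the codes shown to the model are
# opaque, so a number in the user's question can't collide with them.
topics = {1: "Sigma9", 2: "Process-3", 3: "Process-2"}
DEFAULT_CODE = "97731"

# Assign a short pseudorandom code to each database id.
code_to_id = {secrets.token_hex(3): topic_id for topic_id in topics}

def topic_list() -> str:
    """Render the 'code: name' lines that go into the prompt."""
    return "\n".join(f"{code}: {topics[tid]}" for code, tid in code_to_id.items())

def resolve(model_output: str):
    """Map the code the model returned back to a database id."""
    code = model_output.strip().rstrip(".")
    if code == DEFAULT_CODE:
        return None  # no relevant topic
    return code_to_id.get(code)  # None if the model invented a code
```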


You might want to include one or two examples in your prompt, and make one of them include a number, showing that a question mentioning “prompt 1234” is asking about the subject named “prompt 1234”, not the topic with id 1234. I have personally found that an example of the exact situation where the model gets confused is an effective mitigation.
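For example, something along these lines, reusing the codes from the prompt above (the example question itself is made up):

```python
# Sketch: a one-shot example in the message list covering the exact
# case that confuses the model -- a topic name containing a number.
system_prompt = "..."  # the operator-bot prompt from the post above

messages = [
    {"role": "system", "content": system_prompt},
    # Worked example: "Process 2" means the topic NAMED Process-2 ...
    {"role": "user", "content": "Can you tell me about Process 2?"},
    # ... so the assistant answers with its code, not with topic id 2.
    {"role": "assistant", "content": "2124"},
    # The real question goes last:
    {"role": "user", "content": "What is the longest case in Process 2?"},
]
```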


I am seeing success with this prompt in the playground, but when I use it in my code I am still getting incorrect responses. For additional context, I am using an Azure OpenAI instance with gpt-35-turbo.

You should add examples to your prompt. For instance, show that if the user enters “What is the longest case in process 3”, it should return a79Uv, and outline a few others. I don’t think you need to make your IDs as complex as you did; you just need to give the model examples and it will follow suit correctly.

I found this prompt produced the results I think you seek:

I have the following processes.

ID = 1, Process = my process, Description = “this process is about X”
ID = 2, Process = Process 3, Description = “this process is about Y”
ID = 3, Process = jen, Description = “this process is about Z”

The user asks a question by process name and you need to respond with the process ID
(for example, if they ask about My Process, you should answer 1, and if they ask about Process 3, you should respond with 2).
You should only respond with the process ID, no additional text, just one number.

User Prompt: what is Process 3?


I tried this and it works beautifully in the playground but not in my code. I wonder if it’s because I’m using a less advanced model, or if it’s something with the Azure instance? I am including these instructions in the system prompt, with the user question being all that I pass as the user content. I will try modifying my function description next to see if that makes it better.
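For reference, this is roughly what my call looks like (a sketch; it assumes the openai>=1.0 SDK, and set_topic is just a stand-in for my real function):

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="...",                     # my Azure key
    api_version="2023-07-01-preview",  # a version that supports functions
    azure_endpoint="https://MY-RESOURCE.openai.azure.com",
)

system_prompt = "..."  # the instructions plus the process list from above

# set_topic is a placeholder name for my actual function definition.
functions = [{
    "name": "set_topic",
    "description": "Record the id of the process the user is asking "
                   "about. Match on the process NAME, never on its id.",
    "parameters": {
        "type": "object",
        "properties": {
            "topic_id": {"type": "string", "description": "The process id"},
        },
        "required": ["topic_id"],
    },
}]

response = client.chat.completions.create(
    model="gpt-35-turbo",  # my Azure deployment name
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": '"""what is Process 3?"""'},
    ],
    functions=functions,
    function_call={"name": "set_topic"},  # force the function call
)
print(response.choices[0].message.function_call.arguments)
```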

I tried this prompt on GPT-3.5 and it also seemed to work. I don’t have access to the Azure models anymore, but in the past I did find that they sometimes seemed to be tuned differently. Not sure if you are working for yourself or at a company; if it’s a company, it’s possible the 3.5 turbo you are using was fine-tuned a little and that’s making this difficult for you (I’ve been there myself). Can you ask them to publish a plain version of 3.5 turbo?

Do you wanna share your code?

There shouldn’t really be a difference between playground and code.

In Azure AI Studio you also have a playground; make sure you’re using the same model there.

It has been a while since I updated this thread, but I have it working consistently now. A combination of non-integer ID values, describing the format of the listed processes, and giving the model an example seemed to do the trick.
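For anyone who finds this later, the final system prompt looks roughly like this (the codes and names here are illustrative, not my real data):

```python
# Roughly the final shape of my system prompt: non-integer ids, an
# explicit description of the list format, and one worked example.
system_prompt = """\
You match the user's question to one of the topics below. Topics are
listed one per line as <code>: <name>, where <code> is an opaque id.

a51f1: Process-3
2124x: Process-2
97731: default (use this when no topic matches)

Numbers in the question refer to topic NAMES like "Process 2", never
to a code. Respond with the matching code only, no other text.

Example:
Q: tell me about process 2
A: 2124x
"""
```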
