Smart way to add pricelist to chat completions

I just set things up so that ChatGPT has a price list in the “system” prompt.

The problem I am having is that I am always sending the whole price list, all 355 prices, and that is racking up my tokens like crazy: 7k tokens per request.

I was thinking about using function calling, but from looking around I found that it does not matter whether I put the list in the system prompt or in the function definition; it gets counted either way.

Does anybody have a clever way of handling this, where you have a huge price list and only want GPT to look at a price when it needs it, instead of always having the list in the system prompt?


Can you share an example of the prompt?

What are you trying to do exactly? What’s the workflow?


So what I am trying to do is this:

I am trying to cut down on my tokens, because right now I have 358 items from a price list in the system prompt:

role: "system",
content: `Here is the price list: ${myPriceList}`

The problem I am having is that I don’t want the price list to be sent to GPT on every request, but only when it is needed, like a function call.

So if I send these messages:


{
    role: "system",
    content: `Here is the price list: ${myPriceList}`,
},
{
    // the persona instruction belongs in a "system" message, not an "assistant" one
    role: "system",
    content: "You are an assistant for an e-commerce store"
},
{
    role: "user",
    content: "hello"
}

and then the AI would respond with “Hello, how can I help you?”

The price list with 358 items will still be in the prompt even though the AI did not use it.

So the AI has all that unused info just racking up tokens.

My mind is going in the direction of function calling, but from some digging it seems it does not matter if I put the price list inside a function definition, since that is still counted when I send the whole request to GPT.

So my question is: is there a way to make it check the price list only when needed, without racking up 7.9k tokens on every request?

I am burning through 500k tokens per day on one user.
I did try gpt-3.5-turbo-0125, but for some reason it does not handle the whole long list; it seems to cut off at some point and not know that a price exists, which is why I am using gpt-4-0125-preview.

I hope that makes sense; grateful for any pointers.


Okay, so you’re trying to have the LLM give current prices for items?

This is not a good use case for the LLM on its own. Better to use it to detect which item a customer is talking about, then do an SQL lookup on a database table to grab the price/details/etc.
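Roughly: give the model one small tool definition, and when it calls the tool, run the lookup yourself. Here’s a minimal sketch, assuming the openai Node SDK and better-sqlite3 (the shop.db file and products table are made-up examples):

import OpenAI from "openai";
import Database from "better-sqlite3";

const openai = new OpenAI();
const db = new Database("shop.db"); // hypothetical database of the 358 products

// The model only ever sees this small schema, never the full price list
const tools = [{
    type: "function",
    function: {
        name: "get_price",
        description: "Look up the current price of a product",
        parameters: {
            type: "object",
            properties: {
                item: { type: "string", description: "Product name as the customer wrote it" },
            },
            required: ["item"],
        },
    },
}];

const completion = await openai.chat.completions.create({
    model: "gpt-4-0125-preview",
    messages: [
        { role: "system", content: "You are an assistant for an e-commerce store" },
        { role: "user", content: "How much is the iPhone 11?" },
    ],
    tools,
});

// If the model decided it needs a price, it emits a tool call instead of text
const toolCall = completion.choices[0].message.tool_calls?.[0];
if (toolCall) {
    const { item } = JSON.parse(toolCall.function.arguments);
    const row = db.prepare("SELECT name, price FROM products WHERE name LIKE ?")
                  .get(`%${item}%`);
}

The tool schema costs on the order of a hundred tokens instead of several thousand, and the lookup only happens on turns where a price actually comes up.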

If I’m misunderstanding, please let us know what you’re trying to accomplish. I think I get it, but just trying to make sure.

Yeah, you are correct about what I want it to do.

The only problem is how I would know which item to search for, since the same item can be referred to in different ways, e.g.:

iPhone 11
Apple iPhone 11
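
One idea is to do a simple fuzzy match on my side before involving the model at all. A rough sketch (the item shape and the normalize helper are just made up):

// Hypothetical price list entries
const myPriceList = [
    { name: "Apple iPhone 11", price: 499 },
    { name: "Apple iPhone 12", price: 599 },
];

// Lowercase, strip punctuation, split into tokens
function normalize(name) {
    return name.toLowerCase().replace(/[^a-z0-9 ]/g, " ").split(/\s+/).filter(Boolean);
}

// Score each item by the fraction of query tokens found in its name
function findItem(priceList, query) {
    const queryTokens = normalize(query);
    let best = null;
    let bestScore = 0;
    for (const item of priceList) {
        const nameTokens = new Set(normalize(item.name));
        const hits = queryTokens.filter((t) => nameTokens.has(t)).length;
        const score = hits / queryTokens.length;
        if (score > bestScore) {
            bestScore = score;
            best = item;
        }
    }
    return bestScore >= 0.5 ? best : null; // threshold is arbitrary
}

// "iPhone 11" and "Apple iPhone 11" both resolve to the same row
findItem(myPriceList, "iPhone 11"); // => { name: "Apple iPhone 11", price: 499 }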

I could also chain GPT requests, for example:

Have one function, getCurrentPriceOnItem, whose call tells me which item the user is talking about.

Then I could make another GPT request where I give it the price list, get the correct price, and pass that response back to the initial GPT.

That is the only way I can see to make it work, but I wanted to check if there is a better one.
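
In code, I imagine the second leg would look something like this (a rough sketch building on the tool setup sketched above, with the tool renamed to getCurrentPriceOnItem; findItem is the fuzzy matcher from my earlier sketch):

// completion, toolCall and messages come from a first request like the one above
const { item } = JSON.parse(toolCall.function.arguments);
const match = findItem(myPriceList, item); // fuzzy lookup instead of SQL

const followUp = await openai.chat.completions.create({
    model: "gpt-4-0125-preview",
    messages: [
        ...messages,
        completion.choices[0].message, // the assistant turn containing the tool call
        {
            role: "tool",
            tool_call_id: toolCall.id,
            // only the one matched row goes back, not all 358 items
            content: JSON.stringify(match ?? { error: "item not found" }),
        },
    ],
    tools,
});

console.log(followUp.choices[0].message.content);

This way the second request carries one price instead of the whole list, and nothing price-related is sent at all on turns like “hello”.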
