Persisting context without consuming from the API

I am trying to develop an application where I need GPT to classify apps based on their descriptions. I already have predefined categories, and I want it to fit each app into one of these categories.

Is there any way that I could input only the app description without having to include the predefined categories in the prompt every time I make a request (like a persistent context)? I am trying to do this so that I can decrease the size of the prompt and the cost of the API calls…

You could maybe define the categories differently in your call. Instead of sending `Categories: Culinary, Utility, Social Network`, send something like `Categories: C1, C2, C3`. Then when you pass the results to the front end, simply map the codes back to the full names.
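A minimal sketch of that renaming idea (all names and helpers here are made up for illustration, not from the thread): the category definitions still have to go into every prompt, but the model is asked to answer with a short code, which you map back to the full name on your side.

```python
# Hypothetical category map: short code -> full display name.
CATEGORIES = {
    "C1": "Culinary",
    "C2": "Utility",
    "C3": "Social Network",
}

def build_prompt(app_description: str) -> str:
    """Build a classification prompt that lists categories as short codes
    and asks the model to reply with a code only."""
    category_lines = [f"{code}: {name}" for code, name in CATEGORIES.items()]
    return (
        "Classify the app below into exactly one of these categories "
        "and reply with the code only.\n"
        + "\n".join(category_lines)
        + f"\n\nApp description: {app_description}"
    )

def decode(model_reply: str) -> str:
    """Map the model's code (e.g. 'C2') back to its full category name."""
    return CATEGORIES[model_reply.strip().upper()]
```

You would send `build_prompt(...)` to the API as usual and run the reply through `decode` before showing it to users; the savings come from the short codes in the model's output and anywhere the names are repeated in the prompt.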

Also, I have heard rumors on YouTube that costs might be coming down drastically soon, so this might be a non-issue for you before long.

edit: sorry if this didn’t accurately answer your question. If you can give more context or show some code, I could help more.


Yeah, whatever content is required to define your categories will have to be sent in each request. This is just a limitation of how LLMs work. But as @tventura94 was saying, you might at least be able to rename the categories as “c1”, “c2”, etc., so that the category names themselves aren’t consuming many tokens.


I see. @tventura94 @wclayf thanks for the replies. I think that for the meantime I’ll go with the c1, c2, c3… solution. I hope this will at least bring the costs down a bit.

I was just reading about symbol tuning. I’m a newb, but it seems similar and cool.