How to teach a GPT model the context

Hello,
I need help redesigning my current implementation.
I’ve got 20 functions and a chat. The user types something into the chat like “Give me the applications I own.”
Then I have a huge prompt that holds the description and parameters of each of these 20 functions, so based on the conversation history the model knows the context and picks the right function to execute. After each execution I ask what to do next and whether that’s the final answer, because multiple functions might need to be executed to answer the user. So the GPT model’s responsibility is to orchestrate the functions, emitting JSON-formatted actions that say what to execute with which params. That works awesome, and it’s able to lead complex conversations and answer the users’ questions.
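
For context, each action the model emits looks roughly like this (a simplified sketch; the field and function names here are placeholders, not my exact schema):

```python
# Simplified sketch of one action the model emits (placeholder names,
# not my real schema):
action = {
    "function": "list_owned_apps",        # which function to execute next
    "params": {"owner": "current_user"},  # arguments for that function
    "final_answer": False,                # False -> more actions may follow
}
```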

The problem:
This application is hard to scale because everything lives in one huge prompt. That drives up execution time, and I’m getting close to the token limit. So the question is: how can I teach the LLM which functions are available without passing all of them into the prompt each time? It would be great to somehow pick 3-5 candidate functions out of the N-function dataset based on the conversation history, so the existing flow keeps working like a charm.
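
My current idea is embedding-based retrieval over the function descriptions: embed each description once, embed the conversation (or at least the latest user message), and keep only the top-k matches by cosine similarity. A minimal sketch of what I mean, assuming the OpenAI embeddings API and numpy (the function names and descriptions are made up):

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Made-up examples standing in for the real 20 function descriptions.
FUNCTIONS = {
    "list_owned_apps": "Return the applications owned by the current user.",
    "get_cpu_utilization": "Return CPU utilization metrics for a given application.",
    "restart_app": "Restart a given application.",
}

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# Embed every function description once, up front, and cache the result.
names = list(FUNCTIONS)
desc_vecs = embed([FUNCTIONS[n] for n in names])
desc_vecs /= np.linalg.norm(desc_vecs, axis=1, keepdims=True)

def top_k_functions(conversation, k=5):
    """Return the k function names whose descriptions best match the conversation."""
    q = embed([conversation])[0]
    q /= np.linalg.norm(q)
    scores = desc_vecs @ q  # cosine similarity, since vectors are unit-normed
    return [names[i] for i in np.argsort(scores)[::-1][:k]]
```

Only the retrieved 3-5 function descriptions would then go into the prompt, so the prompt size stays roughly constant no matter how large N grows.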

Any ideas how to approach this?
I don’t think fine-tuning will work here, because it would pick just one function, while questions might be compound, like: “Give me the apps owned by me and then check the CPU utilization for them.”
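
That’s also why I’m hoping top-k retrieval (rather than a single fine-tuned pick) would handle this: for a compound question, several relevant functions should score highly at the same time. With the sketch above:

```python
# With k > 1, a compound question can surface several functions at once.
picked = top_k_functions(
    "Give me the apps owned by me and then check the CPU utilization for them.",
    k=5,
)
# Should hopefully include both "list_owned_apps" and "get_cpu_utilization";
# the existing orchestration loop then chains them as before.
print(picked)
```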

Summary:
I need to somehow pick the functions closest to the user’s intent from the dataset.