Query semantic separation

Hello.
I have a problem with Q&A over context.
For example, when asking “Which are common shareholders between Google and Amazon?”, the answer is not correct.
The model is ‘gpt-3.5-turbo-16k’, because the context is long.
If I use the ‘gpt-4’ model, the answer is correct, but as you know it’s expensive.
I want to ask if there is a way to split the query.
For example, retrieve data for the sub-queries “Who are the shareholders of Google?” and “Who are the shareholders of Amazon?”, then use the retrieved data as the context for the final query.
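Roughly, what I have in mind is something like this (just a sketch; `retrieve()` is a placeholder for my retrieval step):

```python
import openai

def retrieve(query: str) -> str:
    # Placeholder for my retriever (vector store lookup, etc.).
    return f"(passages retrieved for: {query})"

sub_queries = [
    "Who are the shareholders of Google?",
    "Who are the shareholders of Amazon?",
]

# Retrieve context for each sub-query separately, so each piece stays small.
context = "\n\n".join(retrieve(q) for q in sub_queries)

final_question = "Which are common shareholders between Google and Amazon?"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {final_question}"},
    ],
)
print(response["choices"][0]["message"]["content"])
```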
Please help me.
Thanks.

So you are giving GPT-3.5 a very long context? May I ask why?

What’s your stack? Do you take all the data from a database and put it into the prompt?

I am asking because it may be much better to store the data in a database, where it belongs.
Then have your user talk to gpt-3.5, take what the user asks for, and let the model translate it into something the backend understands.
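For example, with function calling (a rough sketch; the `get_shareholders` function and its schema are made up just to illustrate the idea):

```python
import json
import openai

# Describe a backend operation the model is allowed to "call".
functions = [
    {
        "name": "get_shareholders",
        "description": "Look up the shareholders of one or more companies in the backend database.",
        "parameters": {
            "type": "object",
            "properties": {
                "companies": {
                    "type": "array",
                    "items": {"type": "string"},
                    "description": "Company names to look up.",
                }
            },
            "required": ["companies"],
        },
    }
]

user_question = "Which are common shareholders between Google and Amazon?"

# The model translates the user's question into a structured call.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": user_question}],
    functions=functions,
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    args = json.loads(message["function_call"]["arguments"])
    # Your backend fetches the exact rows and computes the intersection itself.
    print(message["function_call"]["name"], args["companies"])
```

That way the exact data never has to fit into the prompt at all; the model only produces the structured request.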

If you have exact data as input, and want an exact answer out, GPT is not the right execution engine.
You might do better if you ask GPT to generate some SQL or some Python code to calculate the response, based on the data you have.
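Rough sketch of that idea (the `shareholders` table and the SQLite file are made up; in practice you would also validate the generated SQL before running it):

```python
import sqlite3
import openai

schema = "CREATE TABLE shareholders (company TEXT, shareholder TEXT);"
question = "Which are common shareholders between Google and Amazon?"

# Ask the model for SQL only; the database does the exact computation.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                f"Schema:\n{schema}\n\n"
                f"Write one SQLite query that answers: {question}\n"
                "Return only the SQL, no explanation."
            ),
        }
    ],
)
sql = response["choices"][0]["message"]["content"].strip().strip("`")

conn = sqlite3.connect("holdings.db")  # wherever the data actually lives
print(conn.execute(sql).fetchall())
```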