Hello there, I would like to ask: when a custom GPT fetches, let's say, a JSON file with 300 entries where each entry contains 5 keys, that's a pretty big JSON. How can I fine-tune or optimize the custom GPT so it doesn't forget or hallucinate and retrieves the correct information when I ask it to aggregate the data by some key?
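To make it concrete, the kind of aggregation I mean looks roughly like this in code (the entry shape and key names are just made-up examples):

```python
# Minimal sketch of the aggregation in question: grouping ~300 entries
# by one key and counting them. The keys ("category", "vendor", etc.)
# are hypothetical placeholders, not the real data.
from collections import defaultdict

entries = [
    {"id": 1, "category": "books", "price": 12.5, "stock": 4, "vendor": "A"},
    {"id": 2, "category": "games", "price": 30.0, "stock": 2, "vendor": "B"},
    # ... ~300 entries total, 5 keys each
]

counts = defaultdict(int)
for entry in entries:
    counts[entry["category"]] += 1

print(dict(counts))  # e.g. {"books": 1, "games": 1}
```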
Sounds like you’re trying to replicate proven SQL technology?
What’s your use case? What are you trying to achieve?
There might be a better way!
That’s a lot of data for an LLM to go through.
I know it's possible to do this through a database, where the GPT just supplies the parameters to filter on. But I was wondering: are there techniques I could use so the GPT can handle longer contexts like this, or would I need a better model?
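For reference, here is a minimal sketch of that database-backed approach, where the GPT only supplies the parameters via a custom GPT Action and the server returns the already-aggregated result. The framework choice (Flask), the `/aggregate` route, and the column names are all assumptions for illustration:

```python
# Sketch of an Action endpoint the custom GPT could call instead of
# loading all 300 entries into its context. The GPT sends only small
# parameters; the server does the filtering/aggregation and returns a
# compact summary. Flask, "entries.db", and the column names are
# hypothetical, not a confirmed setup.
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.get("/aggregate")
def aggregate():
    # The key to group by, chosen by the GPT when it calls the Action.
    group_key = request.args.get("group_by", "category")
    # Whitelist the allowed columns so the GPT cannot inject SQL.
    if group_key not in {"category", "vendor"}:
        return jsonify(error="unsupported group_by key"), 400
    con = sqlite3.connect("entries.db")
    rows = con.execute(
        f"SELECT {group_key}, COUNT(*) FROM entries GROUP BY {group_key}"
    ).fetchall()
    con.close()
    # Only this small aggregated summary goes back into the model's context.
    return jsonify({key: count for key, count in rows})

if __name__ == "__main__":
    app.run()
```

With something like this, the model never has to hold the 300 entries in its context window at all; it just passes `group_by=vendor` and reads back a handful of numbers, which avoids the forgetting/hallucination problem entirely.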
Thanks for the response.