As I understand it, the GPT-4 model used for Custom GPTs has a 32K-token context window. Some of that is used up by OpenAI's own system prompt telling the GPT how to act, plus my custom instructions, which are only about 500 tokens.
When called, my API will return data for the GPT and user to ask questions about. I am not using file upload because the data gets updated regularly.
If I want the user to be able to have a reasonable-length “talk” with the data, lasting a dozen or so turns (or until the user hits their cap, haha), how much data is reasonable to return from my API? Right now, I am looking at around 16K tokens’ worth. Can I go higher, or should I go lower? I can break the data up into smaller chunks, but I’d like to have a good maximum that is “safe”. Thanks!
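For what it's worth, here's the back-of-the-envelope math I've been doing. The system-prompt overhead and the per-turn token cost are just guesses on my part (OpenAI doesn't publish the overhead figure), so treat this as a sketch, not official numbers:

```python
# Rough token-budget arithmetic for a Custom GPT conversation.
# Everything except the 500-token custom instructions is an estimate.

CONTEXT_WINDOW = 32_768      # assumed 32K context for the GPT-4 model behind Custom GPTs
SYSTEM_OVERHEAD = 1_500      # guess: OpenAI's own hidden instructions to the GPT
CUSTOM_INSTRUCTIONS = 500    # my instructions, roughly measured
TOKENS_PER_TURN = 600        # guess: one user question plus one model answer

def remaining_turns(api_payload: int) -> float:
    """Estimate how many Q&A turns fit after the API data is loaded."""
    free = CONTEXT_WINDOW - SYSTEM_OVERHEAD - CUSTOM_INSTRUCTIONS - api_payload
    return free / TOKENS_PER_TURN

print(remaining_turns(16_000))  # a 16K payload leaves roughly two dozen turns
print(remaining_turns(24_000))  # a 24K payload leaves far fewer
```

By this estimate a 16K payload still leaves plenty of headroom for a dozen turns, but that assumes the model keeps the full conversation in context rather than truncating older turns, which is where I'd love a sanity check.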