You would pay $0.20 per GB per day (or part thereof), so a 100 MB file would cost $0.02 per day per assistant, plus the token charges for your chosen model for whatever goes to and comes back from the API.
What is the token cost? I know the per-token price, but I'm finding that single questions asked of the "GPTs" range from fractions of a penny to multiple dollars while using the same file. How do I know the cost in advance?
The first question asked about the data, if it results in little context being retrieved, will use a small number of tokens. But a question many turns later, with complex context retrievals from the dataset you provide, may take many tokens to evaluate, because all of the conversation history has to be resent each time to maintain the conversational format.
What does it mean that all historical data needs to be sent each time? Does it mean that if I send 512 MB of data with my first prompt and then send subsequent prompts in the same context, it will count the 512 MB of data again and calculate tokens and cost for it each time?
It is not possible to send megabytes of data directly to an OpenAI model in a single request. Input must fit within a limited context window; on the largest models that is under roughly 100k words before the API returns an error.
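Because of that limit, a large file has to be broken into pieces that each fit the context window. Here is a minimal sketch of greedy paragraph chunking under a token budget. It uses the crude heuristic of roughly 4 characters per token; for real accuracy you would count tokens with an actual tokenizer such as tiktoken. The budget value is an assumption for illustration:

```python
# Sketch: split text into chunks that fit an assumed token budget,
# using the rough ~4-characters-per-token heuristic (not a real tokenizer).

CHARS_PER_TOKEN = 4  # crude assumption; real token counts vary by content

def estimate_tokens(text: str) -> int:
    """Very rough token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def chunk_text(text: str, max_tokens: int = 3000) -> list[str]:
    """Greedily pack paragraphs into chunks under the token budget."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = (current + "\n\n" + para) if current else para
        if estimate_tokens(candidate) > max_tokens and current:
            chunks.append(current)   # current chunk is full; start a new one
            current = para
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be sent as a separate request, which is essentially what retrieval tooling does behind the scenes: it stores the chunks and pulls in only the relevant ones per question.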