Save knowledge in memory, save time and tokens

If I connect the ChatGPT API to my folder of 100 PDF text files, will ChatGPT read all the files for every query? And if the answer is yes, is there a better way to approach this? Or can ChatGPT read all the files once and save that “knowledge” in its memory or something similar? That way I could save tokens and time.
Any idea?
Thanks

The AI is “stateless”. This means you must pass all of the required context every time you make an API call. As current-generation AI stands, there is no way for it to have “memory” of past events.
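
For example, here is a minimal sketch assuming the `openai` Python package: the only “memory” is the `history` list you build yourself and resend in full on every request (the model name is just a placeholder).

```python
# Because the model is stateless, every call must resend the full
# conversation history; the `history` list is the only "memory" there is.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "system", "content": "You answer questions about my PDFs."}]

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=history,     # the entire context, resent on every call
    )
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```

Note that `history` grows with every turn, which is exactly why token costs climb: nothing persists on the server side between calls.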


You can upload the documents directly into the Playground (within its file limits) and then ask that GPT assistant questions. It will answer from the documents you have saved to that GPT.
You can balance the weight of the 100 documents by segmenting them into subjects, uploading each subject’s documents into an individual GPT agent in the Playground, and then calling the different APIs, which your IDE triggers when you ask about that specific subject.
This requires a local warehouse that the IDE can access, with the subject API keys registered to each GPT.
The IDE’s oversight AI then needs to choose which API key to use based on the question it receives (a routing sketch follows this post).
The IDE has to use a turn-based system that lets the agents respond, lets the oversight AI review those responses, and then combines them into a final answer presented to the user.
Alternatively, you can create a local warehouse and train a local model such as Phi-2, Mixtral 8x7B, or Mamba on the data, then have those agents answer the API-key agent’s questions (a second sketch below).
All of these options require your IDE to run in tandem with the API keys active.
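
Here is a minimal sketch of the routing step described above, assuming the `openai` Python package. The subject names, keyword lists, per-subject API keys, and model name are all hypothetical placeholders; in practice the oversight AI could be another model call instead of keyword matching.

```python
# Route a question to the subject-specific agent, then ask that agent.
from openai import OpenAI

# Hypothetical subject -> agent mapping; keys and keywords are placeholders.
SUBJECT_AGENTS = {
    "finance": {"api_key": "sk-finance-...", "keywords": ["invoice", "budget"]},
    "legal":   {"api_key": "sk-legal-...",   "keywords": ["contract", "clause"]},
}

def route(question: str) -> str:
    """Pick the subject whose keywords appear in the question."""
    q = question.lower()
    for subject, agent in SUBJECT_AGENTS.items():
        if any(kw in q for kw in agent["keywords"]):
            return subject
    return "finance"  # hypothetical default subject

def ask(question: str) -> str:
    subject = route(question)
    client = OpenAI(api_key=SUBJECT_AGENTS[subject]["api_key"])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content
```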
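
And a minimal sketch of the local-model alternative, assuming the `transformers` package and hardware capable of loading `microsoft/phi-2`. It only shows the “ask the local expert” step; any fine-tuning on your documents would happen separately, and the prompt format and helper name are illustrative.

```python
# Query a locally hosted model so the subject knowledge never hits the API.
from transformers import pipeline

local_expert = pipeline("text-generation", model="microsoft/phi-2")

def ask_local_expert(question: str, subject_context: str) -> str:
    # For illustration the subject data is passed as prompt context.
    prompt = f"{subject_context}\n\nQuestion: {question}\nAnswer:"
    out = local_expert(prompt, max_new_tokens=200, do_sample=False)
    # generated_text includes the prompt; strip it to keep only the answer.
    return out[0]["generated_text"][len(prompt):].strip()
```

The oversight agent behind the API key can then feed these local answers into its final combined response.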
