You're going to be limited by the context length, so if the text is small enough you can bring a text file in directly and, with the right prompts, get answers. For files that exceed the context length you have to chunk the file, store the chunks in a vector database, and pull back only the relevant pieces at question time (rough sketch below); there's a good walkthrough using Pinecone here: GPT-4 Tutorial: How to Chat With Multiple PDF Files (~1000 pages of Tesla's 10-K Annual Reports) - YouTube. It's also possible to use the API to generate a knowledge graph from the data locally and interrogate the data through that, using context pointers. The final method, which I haven't explored yet, is to fine-tune a model on your particular data set. You'd have to look into that more, but it could be a viable option.
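To make the chunk-and-retrieve idea concrete, here's a minimal sketch assuming nothing beyond numpy. The file name, the toy embed() function, and the in-memory store are all stand-ins I made up for illustration; in practice you'd swap embed() for a real embedding model and the list for something like Pinecone, but the flow (chunk, embed, store, retrieve by similarity, paste the hits into your prompt) is the same.

```python
# Sketch of chunk -> embed -> store -> retrieve. The embed() below is a toy
# hashed bag-of-words vector, NOT a real embedding model; swap it out for
# your embedding API of choice and keep the rest of the flow unchanged.
import hashlib
import numpy as np

def chunk(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split the document into overlapping character windows small enough for the context."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(piece: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a real embedding model: a normalized hashed bag-of-words vector."""
    vec = np.zeros(dim)
    for token in piece.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def build_store(text: str) -> list[tuple[str, np.ndarray]]:
    """Embed every chunk; a vector database like Pinecone plays this role at scale."""
    return [(c, embed(c)) for c in chunk(text)]

def retrieve(store, question: str, top_k: int = 3) -> list[str]:
    """Return the top_k chunks by cosine similarity to the question."""
    q = embed(question)
    scored = sorted(store, key=lambda item: float(item[1] @ q), reverse=True)
    return [c for c, _ in scored[:top_k]]

if __name__ == "__main__":
    document = open("report.txt").read()  # hypothetical local text dump of your document
    store = build_store(document)
    for hit in retrieve(store, "What were the main risk factors?"):
        print(hit[:120], "...")
```

The retrieved chunks then get pasted into the prompt ahead of the question, so the model only ever sees the handful of passages that matter instead of the whole file.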