Welcome to the dev forum @blanca.ai.project24
As @anon22939549 hinted, you can chunk and embed your documents within the same integration you have already implemented; it should then be able to retrieve the relevant context from the documents to answer the query.
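A minimal sketch of what that could look like, assuming the current Python SDK and the `text-embedding-3-small` model (the chunk size, file name, and query are placeholders, not from your integration):

```python
from openai import OpenAI
import numpy as np

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def chunk_text(text, chunk_size=1000, overlap=200):
    """Split text into overlapping character chunks."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def embed(texts):
    """Embed a list of strings and return an (n, d) numpy array."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

document = open("my_document.txt").read()  # hypothetical file
chunks = chunk_text(document)
chunk_vectors = embed(chunks)

query = "What does the document say about pricing?"
query_vector = embed([query])[0]

# Cosine similarity between the query and every chunk
scores = chunk_vectors @ query_vector / (
    np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(query_vector)
)
best_chunk = chunks[int(np.argmax(scores))]
# best_chunk then goes into the prompt as context for the model to answer from.
```

For anything beyond a handful of documents you would typically store the embeddings in a vector database rather than recomputing them per query, but the retrieval logic stays the same.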
Alternatively, you can load the entire document into the API call to the gpt-4-turbo-preview model, provided the document is small enough to leave room in the context window for generating an answer.
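A rough example of that approach, again assuming the current Python SDK (the file name and question are placeholders):

```python
from openai import OpenAI

client = OpenAI()
document = open("my_document.txt").read()  # hypothetical file, must fit in the context window

response = client.chat.completions.create(
    model="gpt-4-turbo-preview",
    messages=[
        {"role": "system", "content": "Answer questions using only the document provided."},
        {"role": "user", "content": f"Document:\n{document}\n\nQuestion: What does it say about pricing?"},
    ],
)
print(response.choices[0].message.content)
```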