Getting GPT-4o to analyse docs with large blocks of text

Hello everyone! Is it possible to upload a text document (of, say, 7,000 words) to GPT-4o and then have several prompts analyse the same large body of text without having to re-upload the text for each prompt? If so, which endpoints should I use? (I am working in Python.) Thanks very much in advance for any help that anyone can offer. Mike

Yes, you can use the Assistants API (e.g. client.beta.threads.create_and_run): upload the file once and link it in each thread, then you run the thread against the assistants you have configured.
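Something like this, as a rough sketch (document.txt and the example questions are placeholders, and this assumes a recent openai Python SDK with the v2 Assistants API):

from openai import OpenAI

client = OpenAI()

# Upload the document once
doc = client.files.create(file=open("document.txt", "rb"), purpose="assistants")

# An assistant that can search the attached file
assistant = client.beta.assistants.create(
    model="gpt-4o",
    instructions="Answer questions based on the attached document.",
    tools=[{"type": "file_search"}],
)

# Create a thread with the file attached to the first message
thread = client.beta.threads.create(
    messages=[{
        "role": "user",
        "content": "Summarise the document.",
        "attachments": [{"file_id": doc.id, "tools": [{"type": "file_search"}]}],
    }]
)
run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=assistant.id)

# Follow-up prompts reuse the same thread, so the file is never re-uploaded
client.beta.threads.messages.create(thread_id=thread.id, role="user", content="List the key arguments.")
run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=assistant.id)

# Messages come back newest first
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)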

Many, many thanks. I will look into that (I am not very experienced with Python). So does this look like I am taking the wrong approach? The initial_context would be a large amount of text:

import os

from openai import OpenAI


class OpenAIHelper:
    def __init__(self):
        # The client picks up OPENAI_API_KEY from the environment by default
        self.client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

    def query_with_context(self, initial_context, prompt):
        # Send the full document plus the question as a single chat completion
        completion = self.client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "You are a knowledgeable assistant. Please answer the questions based on the provided text."},
                {"role": "user", "content": f"Given the previous text: {initial_context}\n{prompt}"},
            ],
            temperature=0.5,
        )
        return completion.choices[0].message.content

    def analyze_text(self, text, prompts):
        # Run each named prompt against the same document text
        results = {}
        for prompt_name, prompt_text in prompts.items():
            results[prompt_name] = self.query_with_context(text, prompt_text)
        return results
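For context, I would call it along these lines (article.txt is just a placeholder for my document):

helper = OpenAIHelper()

# Load the ~7,000-word document once
with open("article.txt") as f:
    document_text = f.read()

prompts = {
    "summary": "Summarise the main argument in three sentences.",
    "themes": "List the key themes discussed in the text.",
}

results = helper.analyze_text(document_text, prompts)
for name, answer in results.items():
    print(f"--- {name} ---\n{answer}\n")

(I realise each call re-sends the whole text as input tokens, which is what I was hoping to avoid.)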

If you have API access, your use case would benefit from creating an assistant and using file search. That would be the easy approach; check out this guide by OpenAI and let me know if you have any questions. Cheers!
https://platform.openai.com/docs/assistants/tools/file-search
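Roughly, the flow with a vector store looks like this (a sketch only; document.txt and the store name are placeholders, and the beta namespaces assume a recent openai Python SDK):

from openai import OpenAI

client = OpenAI()

# Put the document in a vector store; chunking and embedding happen server-side
vector_store = client.beta.vector_stores.create(name="doc-analysis")
client.beta.vector_stores.files.upload_and_poll(
    vector_store_id=vector_store.id,
    file=open("document.txt", "rb"),
)

# Attach the vector store to an assistant via the file_search tool;
# any thread run against this assistant can then search those docs
assistant = client.beta.assistants.create(
    model="gpt-4o",
    instructions="Answer questions using the attached documents.",
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)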

Thanks very much indeed, Munna, and for your speedy message. I will definitely check that approach out. Best, Mike

Wouldn’t the assistant read any docs from a vector store it is attached to?