Do I need the API, or is a custom web version OK?

I’m not a programmer, but I need to create multiple GPTs - each one is a task in an algorithm flow for marketing purposes. When I started browsing Fiverr, everyone was pushing me to get a custom-trained vector database with an LLM connected to the ChatGPT API, but I think that might not be needed.
Each task is a lesson in a .txt file, from which a person could learn how to perform a specific task with the provided data. So I have a .txt with instructions (like a university lecture), and then I would provide a dataset to be processed according to the task. Then I take the output and transfer it to the next GPT, and so on - until I get the end result.
The .txt files are not large, but the dataset can be big, and obviously the bigger the dataset, the more precise the results. So I’m not sure what to do: stick to web GPTs, or run the Assistants API? And how do I make every prompt query the initial .txt provided?
Should I hire someone, or would some simple web interface from GitHub or similar be enough? If I hire, what services should I be looking for? One guy told me I need to build a custom vector database from my .txt files and then just query it (I forgot what the ChatGPT API has to do with it, but it must matter, since I want the best result possible) and train it, all for a few thousand bucks. But I’m not a marketing professional, just studying for now.
Or maybe just regular API will suffice?
Thank you

Because it’s not clear to me whether the context size is the same between the GPT API and the Assistants API; maybe the latter is smaller, in which case I should stick to the GPT API if I need large data processed…

What are your datasets and what do you want the end result to be? What is your goal and what are you trying to create?

The Assistants API is really nice once you get a handle on it; you can chain multiple assistants together, each with different instructions. You would just feed the data to the first one, then take the output and feed it into the second, etc. You can try enabling Code Interpreter on them, and they might create their own functions to process the data based on the instructions (I’m still not too familiar with how this works). Or you can set up each assistant with a function using function calling.
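The chaining idea above can be sketched locally. The step functions here are stand-ins (hypothetical names, not real API calls); in practice each step would create a run against a different assistant and read back its reply:

```python
# Sketch of chaining "assistants": each step takes the previous
# step's output as its input. The two step functions below are
# placeholders for calls to two differently-instructed assistants.

def summarize_step(text: str) -> str:
    # Placeholder for: send `text` to assistant #1 ("summarize the dialog")
    return f"summary({text})"

def extract_step(text: str) -> str:
    # Placeholder for: send `text` to assistant #2 ("extract marketing insights")
    return f"insights({text})"

def run_pipeline(data: str, steps) -> str:
    """Feed data through each step in order, chaining outputs."""
    for step in steps:
        data = step(data)
    return data

result = run_pipeline("raw forum dialog", [summarize_step, extract_step])
print(result)  # insights(summary(raw forum dialog))
```

The point is only the shape: one function per assistant, and a loop that threads the output forward.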

I used to use a vector database in my previous GPT apps, but it seems they are no longer needed, as that functionality is kind of built in now.
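For context, all a vector database does is store numeric embeddings of your text chunks and return the ones closest to a query. A toy version, with hand-made 2-D vectors standing in for real embeddings from an embedding model:

```python
import math

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "vector database": text chunks with made-up 2-D embeddings.
db = [
    ("pricing objections lesson", [1.0, 0.1]),
    ("interview question list",   [0.1, 1.0]),
]

def query(vec, k=1):
    # Return the k chunks whose embeddings are closest to the query vector.
    ranked = sorted(db, key=lambda item: cosine(vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

print(query([0.9, 0.2]))  # ['pricing objections lesson']
```

The built-in retrieval in the Assistants API does essentially this for you over your uploaded files, which is why a separate hand-rolled vector database is often unnecessary now.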


Datasets are scraped forum dialogs. I need GPT to analyze them according to different marketing tasks, to substitute those dialogs for real-world interviews. This can easily be 100,000 dialogs.

OK, trying to see if I have this right:

  1. You have 100k + dialogs scraped from forums.
  2. You need multiple assistants each with their own instructions to analyze each dialog?
  3. You want each assistant to substitute the dialog based on their instruction?

What would you be doing with the substituted data? This would determine whether you need a vector database or a fine-tuned model. I don’t have experience with fine-tuning, though, so hopefully someone else can chime in if that’s the route you need to take.

  1. A lot; not yet done, but it’s huge anyway
  2. Analyze each one, and all of them in total
  3. Take each dialog as if it’s part of an interview process

Instead of conducting real-world interviews with these people, I need assistants to extrapolate from the dataset as if it were an interview. And each person has many dialogues scattered here and there; taking them all into account is so complicated that I think it’s a very difficult task.
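Since each person's dialogs are scattered across forums, one practical pre-processing step (a sketch with made-up field names, not tied to any particular scraper) is to group everything by author before any assistant sees it:

```python
from collections import defaultdict

# Hypothetical scraped records: (author, source_forum, dialog_text).
records = [
    ("alice", "forum-a", "I always compare prices first."),
    ("bob",   "forum-a", "Reviews matter most to me."),
    ("alice", "forum-b", "Free shipping seals the deal."),
]

# Collect all of one person's dialogs together, wherever they appeared,
# so a single combined "interview" per person can be assembled later.
by_person = defaultdict(list)
for author, forum, text in records:
    by_person[author].append(f"[{forum}] {text}")

for author, dialogs in by_person.items():
    combined = "\n".join(dialogs)
    # `combined` is what would be sent to the analysis assistant,
    # one request per person instead of one per scattered dialog.
    print(author, len(dialogs))
```

Doing this grouping once up front keeps each person's material together, so the "interview extrapolation" step works from a complete picture rather than fragments.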