After a couple of hours reading OpenAI's developer docs, I still can't really tell whether my use case is covered by their tooling:
I want a custom GPT that I can feed lots of tokens of loosely structured context, like a knowledge base, on top of the ChatGPT model.
What I did manage to do is dump a ton of context into a conversation like this:
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# context and prompt are defined elsewhere
response = client.chat.completions.create(
    model="gpt-4-turbo-preview",
    messages=[
        {
            "role": "system",
            "content": "You are a virtual knowledge-base assistant; you answer "
            "questions and give instructions based on this given content:\n"
            + context,
        },
        {
            "role": "user",
            "content": prompt,
        },
    ],
)
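For reference, the pattern above can be factored into a small helper so the context-priming is reusable. This is just a sketch; `build_messages` and `ask` are hypothetical names, and the OpenAI client is passed in so nothing touches the network until you call it:

```python
def build_messages(context: str, prompt: str) -> list[dict]:
    """Assemble the system + user messages that prime the model with context."""
    return [
        {
            "role": "system",
            "content": (
                "You are a virtual knowledge-base assistant; you answer "
                "questions and give instructions based on this given "
                "content:\n" + context
            ),
        },
        {"role": "user", "content": prompt},
    ]


def ask(client, context: str, prompt: str,
        model: str = "gpt-4-turbo-preview") -> str:
    """One-shot question against the pasted-in knowledge base."""
    response = client.chat.completions.create(
        model=model,
        messages=build_messages(context, prompt),
    )
    return response.choices[0].message.content
```

The obvious limit of this approach is that the whole knowledge base has to fit in the model's context window on every call, which is what the Assistants route below avoids.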
ChatGPT will then give proper answers, taking the newly supplied context into account.
So what I'm looking for now is an API where I can create a custom GPT by feeding it context like this (via API). I looked at Assistants and fine-tuned models, but while the Assistants API seems to be about attaching third-party applications, fine-tuned models rely on well-structured prompt-answer training data; neither reflects my use case.
Any hint on whether this is possible would be much appreciated.
It's confusing because people often conflate ChatGPT with the API models. When you say ChatGPT and custom GPT, do you mean the actual chat.openai.com ChatGPT?
None of the stuff from platform.openai.com (the API), like fine-tuning, Assistants, chat completions, etc., works with the chat.openai.com side.
OK, sorry for mixing up terminology. I guess I just want a bot that I can feed knowledge incrementally, via API only, no frontend needed. I think the hot tip in your answer is the knowledge file for the Assistants API; I somehow missed it on my first read.
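For anyone landing here later: the knowledge-file route via the Assistants API looked roughly like this. A hedged sketch, assuming the v1 Python SDK and the original beta retrieval tool; `create_kb_assistant` and `kb_path` are made-up names, and the client is passed in so no API call happens at import time:

```python
def create_kb_assistant(client, kb_path: str):
    """Upload one knowledge file and create an assistant that can search it.

    Knowledge can then be grown incrementally by uploading further files
    and attaching their ids to the same assistant, no frontend needed.
    """
    # Files uploaded with purpose="assistants" become attachable knowledge.
    with open(kb_path, "rb") as fh:
        kb_file = client.files.create(file=fh, purpose="assistants")

    # The "retrieval" tool (Assistants beta) lets the model pull answers
    # out of the attached file instead of the prompt.
    return client.beta.assistants.create(
        model="gpt-4-turbo-preview",
        instructions="Answer questions based on the attached knowledge files.",
        tools=[{"type": "retrieval"}],
        file_ids=[kb_file.id],
    )
```

Unlike the chat-completions approach, the knowledge lives server-side with the assistant rather than being re-sent in every request.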