Hi there,
Currently I have a system where an assistant is connected to a vector database with over 50 files. The problem is that I don't want it to be a chat system like an assistant; instead, I want to give it a single prompt and get the result back, just like a chat completion.
I am wondering whether I can create a system like this:
const messages = [
  {
    role: 'system',
    content: 'Generate questions and answers in the form of an array'
  },
];
let completion = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo-0125',
  // max_tokens: 4000,
  messages: messages,
  temperature: 0,
  functions: functions
});
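To show what I mean, here is a minimal sketch of the one-shot flow I am picturing. `buildMessages` and `retrieveChunks` are hypothetical names I made up; the real retrieval call would depend on which vector store is used:

```javascript
// Hypothetical helper: assemble retrieved chunks into a single
// chat-completions request body. retrieveChunks() would query the
// vector DB and is not shown here.
function buildMessages(chunks) {
  const context = chunks.join('\n---\n');
  return [
    {
      role: 'system',
      content: 'Generate questions and answers in the form of an array'
    },
    {
      role: 'user',
      content: 'Use only the following context:\n' + context
    }
  ];
}

// Usage (sketch):
// const chunks = await retrieveChunks('some topic');
// const completion = await openai.chat.completions.create({
//   model: 'gpt-3.5-turbo-0125',
//   messages: buildMessages(chunks),
//   temperature: 0
// });
```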
But I use a vector db with over 50 files.
The reason I am doing this is that it currently works if I just have 1 file, because that stays under the token limit. However, I have over 200 files and I cannot go through all of them and generate the questions, because:
- I don't want the generated questions to be duplicates
- I have other requests I need to make, and I don't want to send over 1,000 requests just for 200 files.
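One idea I had for the request count (just a sketch, assuming the file contents can be read as plain text) is to batch several files into each request, so 200 files need far fewer calls:

```javascript
// Sketch: group N files per request so 200 files need ~200/N calls
// instead of one (or more) per file. batchFiles is a hypothetical
// helper name; the file contents would come from the vector DB.
function batchFiles(files, batchSize) {
  const batches = [];
  for (let i = 0; i < files.length; i += batchSize) {
    batches.push(files.slice(i, i + batchSize));
  }
  return batches;
}

// e.g. 200 files in batches of 10 -> 20 completion requests:
// const batches = batchFiles(allFiles, 10);
// for (const batch of batches) {
//   // one chat.completions.create call per batch here
// }
```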
Thank you