Chat Completion with Vector Database

Hi there,
Currently I have a system where I use an assistant connected to a vector database with over 50 files. However, the problem is I don't want it to be a chat system like an assistant; rather, I want to give it a prompt and have it return the result, just like a chat completion does.

I am wondering if I can create a system like this:

const messages = [
  {
    role: 'system',
    content: 'Generate questions and answers in the form of an array'
  },
];

// `messages` must be declared before it is used;
// `functions` is assumed to be defined elsewhere
const completion = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo-0125',
  // max_tokens: 4000,
  messages: messages,
  temperature: 0,
  functions: functions
});

But I need it to pull from a vector DB with over 50 files, something like the sketch below.
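A minimal sketch of that flow, assuming a hypothetical queryVectorDb helper in place of whatever vector store client is actually in use (the helper name, its return shape, and the top-k value are placeholders, not a real API):

import OpenAI from 'openai';

const openai = new OpenAI();

// Hypothetical helper: query your vector store (Pinecone, pgvector, etc.)
// and return the most relevant text chunks for the prompt.
async function queryVectorDb(prompt, topK = 5) {
  // ...your retrieval logic here...
  return ['chunk 1', 'chunk 2'];
}

async function askWithContext(prompt) {
  const chunks = await queryVectorDb(prompt);

  const completion = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo-0125',
    temperature: 0,
    messages: [
      {
        role: 'system',
        content: 'Answer using only the context below.\n\nContext:\n' + chunks.join('\n---\n')
      },
      { role: 'user', content: prompt }
    ]
  });

  return completion.choices[0].message.content;
}

The retrieval step replaces what the Assistants file search tool would otherwise do, so the completion itself stays a plain prompt-in, result-out call.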

The reason I am doing this is that it currently works if I just have 1 file, because it does not go over the token limit. However, I have over 200 files and I cannot go through all of them and generate the questions, because:

  1. I don't want the questions to be the same
  2. It would take more requests than I need, and I don't want to send over 1,000 requests just for 200 files (see the batching sketch after this list)
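On point 2, one way to cut the request count is to pack several files into each completion call under a rough size budget. This is only a sketch; the character budget is a stand-in for proper token counting:

// Group file texts into batches under a rough size budget, so one
// completion request can cover several files at once.
function batchFiles(fileTexts, maxChars = 8000) {
  const batches = [];
  let current = [];
  let size = 0;

  for (const text of fileTexts) {
    if (size + text.length > maxChars && current.length > 0) {
      batches.push(current);
      current = [];
      size = 0;
    }
    current.push(text);
    size += text.length;
  }
  if (current.length > 0) batches.push(current);
  return batches;
}

// e.g. const batches = batchFiles(allFileTexts);
// then one chat.completions.create call per batch instead of per file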

Thank you

Welcome to the community!

Is this all you’re sending? It would need to know what source of data it should make the questions from…

Or maybe I’m misunderstanding your question?

Sorry Paul,
I tried to generalise the prompt because it's a bit long, with a few constraints.

What I am trying to do is this:
I have 50+ documents. Rather than using an assistant with the file search feature, I want to go through all 50 documents (PDF, Word, TXT, …) and run a series of prompts on them, e.g.
"Make 50 questions" or "Summarise it".
However, if I use an assistant, it gives the result back to me as a conversation, e.g.:

Here are your questions:

  1. … (with styling)

But I just want a strict array, e.g.
[{ question: "…" }, …]
just like the chat completions function returns.
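One way to force exactly that shape is Chat Completions JSON mode, which gpt-3.5-turbo-0125 supports. A minimal sketch (the prompt wording and the documentText variable are illustrative; note that JSON mode requires the word "JSON" to appear in the messages, and it returns an object rather than a bare array, hence the wrapper key):

import OpenAI from 'openai';

const openai = new OpenAI();

const completion = await openai.chat.completions.create({
  model: 'gpt-3.5-turbo-0125',
  temperature: 0,
  // JSON mode: the model is constrained to emit syntactically valid JSON
  response_format: { type: 'json_object' },
  messages: [
    {
      role: 'system',
      content:
        'Generate questions from the provided text. Respond with JSON of ' +
        'the shape {"questions": [{"question": "..."}]} and nothing else.'
    },
    // documentText is a placeholder for your file contents
    { role: 'user', content: documentText }
  ]
});

const { questions } = JSON.parse(completion.choices[0].message.content);
// questions is now the strict array: [{ question: '...' }, ...]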

Hope this clears it up
Thank you