Quick question, please, as I am a full-stack developer.
I would like to create a live chat that replies to clients using a set of questions and answers from a file.
Could you please tell me which OpenAI model should be used for this, and whether there is a way to upload a file with all the questions and answers directly to OpenAI?
I saw some answers about using a database, but I am worried about the delay of pulling the data from my DB, analyzing it, and replying to the client.
The ideal would be a response data set hosted directly on OpenAI's side.
I would really appreciate some information on this, as I am not really sure how to use OpenAI for this kind of usage.
I am comfortable generating text from the API, just not basing it on preset data.
You’ve made a somewhat conflicting statement of your desires; the kind that might confuse an AI, too.
The AI is much more inclined to answer from pretraining and knowledge than it is to report directly from injected knowledge. However, AI can be instructed and prompted.
OpenAI has an agent framework called “Assistants” which indeed has a file upload and a retrieval search tool, along with the option of simply providing the AI some document text. However, you don’t have control over how data is chunked or over other retrieval techniques, so an “answer search” would not necessarily return single or complete answers. The automatic injection of potentially irrelevant documents, combined with instructions to answer only from them, could be quite problematic.
Assistants is exceptionally slow, also.
I would use semantic search with embeddings instead. Since you already have questions to be matched against, which you can send for embedding, I expect high-quality returns.
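A minimal sketch of what that lookup could look like. Everything here is illustrative: the vocabulary, Q&A pairs, and the `embed()` stand-in are placeholders so the sketch runs offline; in practice you would replace `embed()` with a call to the OpenAI embeddings endpoint (e.g. `client.embeddings.create(model="text-embedding-3-small", input=text)`) and cache the vectors for your Q&A file at startup.

```python
from math import sqrt

# Toy vocabulary for the offline stand-in embedding; a real embedding
# model returns a dense vector and needs no vocabulary at all.
VOCAB = ["price", "refund", "shipping", "account", "cancel", "order"]

def embed(text: str) -> list[float]:
    # Placeholder for the real embeddings API call.
    t = text.lower()
    return [float(t.count(w)) for w in VOCAB]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    denom = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / denom if denom else 0.0

# Your Q&A file, loaded and embedded once, kept in memory (no DB round trip).
qa_pairs = [
    ("How do I get a refund?", "Email support within 30 days for a refund."),
    ("How long does shipping take?", "Shipping takes 3-5 business days."),
]
index = [(q, a, embed(q)) for q, a in qa_pairs]

def best_answer(user_msg: str) -> tuple[float, str, str]:
    # Score the user message against every stored question; return the best.
    scored = [(cosine(embed(user_msg), v), q, a) for q, a, v in index]
    return max(scored)

score, matched_q, answer = best_answer("what is your refund policy")
```

Because the whole index lives in memory, the only per-message latency is one embeddings call plus a linear scan, which is fast for a Q&A file of ordinary size.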
Then you can either automatically inject the matches into your Chat Completions API call, or let the AI use a feature called “functions” to make search requests using terms it writes itself.
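The injection option can be sketched as follows. The `retrieved` list stands in for whatever your embeddings search returned, and the model name is only an example; the commented-out call at the end shows the shape of the actual request with the openai v1.x Python package.

```python
# Assume `retrieved` came back from your semantic search step.
retrieved = [
    ("How do I get a refund?", "Email support within 30 days for a refund."),
]

# Flatten the matched Q&A pairs into a reference block for the system prompt.
context = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in retrieved)

messages = [
    {
        "role": "system",
        "content": (
            "You are a support chatbot. Answer ONLY from the reference "
            "Q&A below. If no entry applies, say you don't know.\n\n"
            + context
        ),
    },
    {"role": "user", "content": "what is your refund policy"},
]

# The actual call (needs an API key) would then look like:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
```

The tight “answer ONLY from the reference” instruction is the lever against the pretraining bias mentioned above; the function-calling variant instead registers a search tool and lets the model decide when to query it.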
Hope that helps. I’ll make way for the brilliance of what EricGT is typing.