Imagine you have a company handbook and want to use OpenAI to build an FAQ bot on top of it, so colleagues could ask any question and the bot would answer according to the handbook. How would I do this? Prompting with the whole handbook is not possible, and fine-tuning completions needs too much data, I think. Any ideas? I really hope to get some exchange here.
It’s a great question, and while it seems straightforward, there could be several ways to create an ideal solution that is also financially practical. There’s an operating cost for every AI solution, so best to factor these in early in the requirements phase.
@PaulBellow referenced a really good approach using embeddings. I’ve built three KM systems using this exact approach, and embeddings have worked well. I’ve also built a few GPT Q&A systems for personal knowledge management that interoperate at the OS level, making it possible for the solution to work in every app context. More about that here.
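The embeddings approach boils down to: embed each handbook chunk once, embed the incoming question, and pick the most similar chunks. Here is a minimal sketch of the retrieval step; the vector values and chunk texts are toy stand-ins (a real system would get the vectors from an embeddings endpoint such as `text-embedding-ada-002`).

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_chunks(query_vec, chunk_vecs, k=3):
    """Return indices of the k chunks most similar to the query."""
    order = sorted(range(len(chunk_vecs)),
                   key=lambda i: cosine(query_vec, chunk_vecs[i]),
                   reverse=True)
    return order[:k]

# Toy 3-d vectors stand in for real embeddings of handbook chunks.
chunks = ["Vacation policy ...", "Expense reports ...", "Remote work ..."]
chunk_vecs = [[1.0, 0.1, 0.0], [0.0, 1.0, 0.1], [0.1, 0.0, 1.0]]
query_vec = [0.9, 0.2, 0.1]   # would be the embedding of the user's question

best = top_chunks(query_vec, chunk_vecs, k=1)
print(chunks[best[0]])        # → Vacation policy ...
```

The retrieved chunk then goes into the prompt, so the model answers from the handbook instead of from general knowledge.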
To incorporate company data, begin by creating a knowledge base, then use the system and assistant roles as shown in this notebook:
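As a rough sketch of what "use the system role" means here: put the retrieved knowledge-base passage into the system message so the model is grounded in it. The instruction wording and passage below are my own illustrative assumptions, not from the notebook.

```python
def build_messages(handbook_passage, question):
    """Ground the model's answer in a retrieved handbook passage."""
    return [
        {"role": "system",
         "content": "You answer employee questions strictly from the "
                    "handbook excerpt below. If the excerpt does not "
                    "contain the answer, say you don't know.\n\n"
                    + handbook_passage},
        {"role": "user", "content": question},
    ]

messages = build_messages(
    "Employees receive 25 paid vacation days per year.",
    "How many vacation days do I get?",
)
# This payload would then be sent to the chat completions endpoint, e.g.:
# response = client.chat.completions.create(model="gpt-3.5-turbo",
#                                           messages=messages)
print(messages[0]["role"])  # → system
```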
Thank you. One question: since I submit fixed texts by role, how can I make OpenAI automatically find the right answers, and ideally also answer from all the submitted information for which I have not yet added a matching question?
This might help.
You have pointed out THE critical aspect of implementing conversational AI!
There are three aspects of implementation:
- Project goal. Knowing exactly what we expect our AI to do.
- Designing a good prompt dataset to make sure that when a person uses the AI, the application knows the answers.
- And now the most difficult part, which you have pointed out: a dataset for a project needs (at least) two columns: the prompt and the response.
To address exactly this issue, OpenAI's API now provides a system message and an assistant message on top of the user message.
For more, you can read and run the following notebook I shared on GitHub.
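To make the two-column idea concrete, here is a minimal sketch of turning (prompt, response) pairs into system/user/assistant records, one JSON object per line, which is the usual layout for chat-style datasets. The system text and the example rows are placeholders I made up for illustration.

```python
import json

# Two-column dataset: (prompt, response) pairs.
dataset = [
    ("How do I reset my password?", "Use the self-service portal."),
    ("Who approves travel requests?", "Your direct manager approves them."),
]

SYSTEM = "You are the company handbook assistant."

def to_chat_records(rows, system=SYSTEM):
    """Turn (prompt, response) rows into system/user/assistant records."""
    return [
        {"messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": response},
        ]}
        for prompt, response in rows
    ]

records = to_chat_records(dataset)
jsonl = "\n".join(json.dumps(r) for r in records)  # one example per line
print(len(records))  # → 2
```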
Love this obvious throwback to good requirements management. We see the cool glow of the AI light and forget all the basic stuff. Good reminder.
Did you forget to include the link?
I’m still very new at this, but I often ask myself - do we all need to become AI experts who craft every meal from scratch? I just want a good meal for my workers. I’m starting to see the buy vs rent debate surface in a lot of discussions. AI, after all, is not as easy as it looks.
Crazy idea perhaps, but maybe experiment with this. I used CustomGPT for an experiment, and it was surprisingly accurate for an FAQ of 200 questions and answers. I exported them into a PDF - no fancy formatting required - then dropped the document into the project and started asking questions. My initial test of 75 questions that were not in the training data was about 88% accurate. Using the CustomGPT API to frame in some prompt guidance will likely nudge accuracy into the 90% range.
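The held-out test described above can be scripted as a tiny evaluation harness: ask the bot each unseen question and score the answers. This is only a sketch with toy stand-ins; a real `judge_fn` might use fuzzy string matching or a grading model rather than exact equality.

```python
def accuracy(qa_pairs, answer_fn, judge_fn):
    """Score a Q&A bot on held-out questions.

    qa_pairs:  (question, expected_answer) tuples not in the training data
    answer_fn: the bot under test
    judge_fn:  returns True when an answer counts as correct
    """
    correct = sum(judge_fn(answer_fn(q), expected) for q, expected in qa_pairs)
    return correct / len(qa_pairs)

# Toy stand-ins for the bot and the grading rule.
held_out = [("capital of France?", "Paris"), ("2 + 2?", "4")]
bot = {"capital of France?": "Paris", "2 + 2?": "5"}.get
judge = lambda got, want: got == want

print(accuracy(held_out, bot, judge))  # → 0.5
```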
If a person just wants to explore OpenAI models, then there is nothing special to know about transformers.
However, if a person wants to implement transformers at an advanced level, then it is necessary to become an expert.
This GitHub repository contains open-source OpenAI Python notebooks and reading resources to begin digging deeper into transformers:
https://github.com/Denis2054/Transformers-for-NLP-2nd-Edition#readme
I hope this helps.