This week I released a new SaaS service that has me really excited. I’m still polishing some technical details, but it’s ready for you to try.
I built it using GPT embeddings and the ChatGPT API to create an advanced FAQ system. Basically, anyone can create a collection of questions and answers for their services or products, and ChatGPT will respond to website visitors 24/7. It’s crazy!
The best part is that you don’t even need to have a website to try it out. When you open an account, I give you a page where you can receive your users or clients. And I’m launching it in FREEMIUM mode, so you can try it out without any commitment.
Sorry if it seems like I’m giving you a hard sell! But as freelancers, we’re always looking for new ways to showcase our projects. If you have any criticisms or suggestions, they’re always welcome.
Two questions (from a technical perspective):
- How do you handle cost attribution for your end-users? Did you build this on your end or is there a way to do this via the OpenAI APIs?
- What prompt/guardrails do you use to prevent the API from replying to off-topic requests (e.g. “write me an essay about the meaning of life”)?
Thanks for your kind words, Jay. Answering your questions:
I use the OpenAI API in the backend for 2 things: to get embeddings of the FAQs and of the visitor’s question, and to ask ChatGPT (the new API) for a customized answer to that question, passing the 2 most relevant FAQs from the knowledge base as context.

Regarding cost: I HOPE that PREMIUM customers will let me pay the bill for the FREE users. Meanwhile, FREE users are limited to 100 requests per day, each of which only calls the embeddings endpoint, and the price per 1K tokens is ridiculously low. For free users I only offer “semantic search”, i.e. the system “only” finds the 2 most relevant FAQs, which is already much better than a classical search by “keywords”. Only PREMIUM users pay 0.01 EUR per “ChatGPT-processed answer”, which is still very cheap for a website trying to convert customers.
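The retrieval step described above can be sketched roughly like this. The FAQ texts, answers, and embedding vectors here are made-up placeholders; in the real system each embedding would come from a call to the OpenAI embeddings endpoint, and the vectors would have hundreds of dimensions rather than three:

```python
import math

# Hypothetical FAQ collection with precomputed embeddings.
# In production, each vector would come from the OpenAI embeddings API.
faqs = [
    ("What are your opening hours?", "We are open 9-18, Mon-Fri.", [0.9, 0.1, 0.0]),
    ("Do you ship internationally?", "Yes, to the EU and the US.", [0.1, 0.9, 0.1]),
    ("How can I reset my password?", "Use the 'forgot password' link.", [0.0, 0.2, 0.9]),
]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k_faqs(question_embedding, k=2):
    """Return the k FAQs most similar to the visitor's question,
    best match first -- these become the CONTEXT for ChatGPT."""
    ranked = sorted(faqs, key=lambda f: cosine(f[2], question_embedding), reverse=True)
    return [(q, a) for q, a, _ in ranked[:k]]

# A visitor asks about shipping; this made-up embedding sits closest
# to the second FAQ, so that one is ranked first.
context = top_k_faqs([0.2, 0.8, 0.1])
```

For free users the pipeline would stop here and just show the two matched FAQs; for premium users they are passed on to the chat endpoint as context.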
When you call the OpenAI endpoint you set a limit on the answer length (I set 300 tokens), so it’s impossible for someone to get ChatGPT to produce a long essay. Beyond that, I worked on, tested, and refined the prompt until ChatGPT stopped giving answers outside the “core business” of my customer’s FAQ collection. It’s quite “simple”; something like this is enough:
“If the answer is not contained within the CONTEXT INFORMATION, only respond ‘Sorry, I do not know the answer’ and don’t say anything more. Don’t talk about anything other than what is provided in the CONTEXT INFORMATION.”
Believe me, it took me several days to get ChatGPT to stop responding to things like “What time is it in London?” or “How do you calculate the area of a circle in geometry?”. You know ChatGPT always tries to be the most eager-to-please guy in the classroom, hihihi.
In the next few days (tomorrow?) I will add to the project’s webpage the ability to define a couple of questions and test how well the system answers your “supposed visitors’” questions.
Indeed, let me share another personal tip: it’s super addictive to check how the system answers each of the questions asked by visitors (they are accessible in the backend), then add new questions to the collection that you never imagined someone would ask, and verify that if someone asks the same question again the answer will be perfect.
Thanks for your interest!