I am currently developing a chatbot on top of the ChatGPT API for my business, aimed at training employees and serving as a Q&A for customers.
So far, I have trained the chatbot on only 240KB of data. However, I have noticed that my prompts need to be extremely specific to elicit a correct response.
I used this guide to make the chatbot:
beebom how-train-ai-chatbot-custom-knowledge-base-chatgpt-api/
How do I make it handle more generalized, vaguer prompts while still outputting correct responses — in other words, improve its general reasoning?
It’s an issue with the May 12 model. Apparently there is a bug causing it to generate nonsensical responses and work less efficiently than the previous model. I would just wait for the next update, tbh.
If you write an API call that stores the information into a database, and then use that database going forward to filter prompts, it may work: it will attempt to match the customer's words against the database before calling GPT to reply. (aichat does something like this, and I have written persistent database prompts for it that work. But I'm one person working with three others, and none of us use this for work — we're just playing around.)
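A minimal sketch of the database-matching idea described above, assuming a SQLite table of stored Q&A pairs and a crude string-similarity check. The names (`QACache`, `answer_for`, `call_gpt`, the 0.8 threshold) are all illustrative, not from aichat or the guide; a real version would likely use embeddings rather than string similarity.

```python
import sqlite3
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

class QACache:
    """Stores question/answer pairs and matches new questions against them."""

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS qa (question TEXT, answer TEXT)"
        )

    def store(self, question: str, answer: str) -> None:
        self.db.execute("INSERT INTO qa VALUES (?, ?)", (question, answer))
        self.db.commit()

    def lookup(self, question: str, threshold: float = 0.8):
        """Return the answer whose stored question best matches, or None."""
        best, best_score = None, 0.0
        for q, a in self.db.execute("SELECT question, answer FROM qa"):
            score = similarity(question, q)
            if score > best_score:
                best, best_score = a, score
        return best if best_score >= threshold else None

def answer_for(cache: QACache, question: str, call_gpt) -> str:
    """Try the database first; only fall back to the GPT API on a miss."""
    cached = cache.lookup(question)
    if cached is not None:
        return cached            # matched in the database, no API call made
    answer = call_gpt(question)  # hypothetical wrapper around the API call
    cache.store(question, answer)
    return answer
```

On a cache hit the API is never called, which is the point: repeated or near-identical customer questions get answered from the stored data, and only genuinely new questions go to the model.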