Improving Chatbot Thought Process

I am currently developing a chatbot built on the ChatGPT API for my business, aimed at training employees and serving as a Q&A for customers.

So far, I have trained the chatbot on only 240 KB of data. However, I have noticed that my prompts need to be extremely specific in order to get a correct response.

I used this guide to make the chatbot:
beebom how-train-ai-chatbot-custom-knowledge-base-chatgpt-api/

How do I make it so that more generalized, vaguer prompts still produce a correct response, i.e. improve its general reasoning?

Thank you! 🙂 🤞

Looks like your link didn’t come through. Can you explain what you mean by “training” the chatbot in this context? To my knowledge, there’s no option for further training or fine-tuning of gpt-3.5-turbo or gpt-4 by the end user at the moment.

Regardless, I would look into chain-of-thought prompting: either provide examples or give explicit instructions for the model to "think" through its answer or show its work before responding with the final answer.
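For example, here's a minimal sketch of that idea using the OpenAI Python SDK (v1-style client; the model name, system prompt wording, and `ask` helper are just placeholders I made up, not anything specific to your setup; adjust if you're on the older `openai.ChatCompletion` interface):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# System prompt that asks the model to reason step by step
# before committing to a final answer.
SYSTEM_PROMPT = (
    "You are a support assistant for our company. "
    "Before answering, think through the question step by step: "
    "restate what is being asked, list the relevant facts you know, "
    "then give your final answer on a new line starting with 'Answer:'."
)

def ask(question: str) -> str:
    """Send one question and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("A customer wants a refund after 45 days. What do we tell them?"))
```

You can also strip the reasoning out before showing the reply to end users and only display the text after "Answer:".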

Here are some resources:

Just providing a few examples of the kind of output you're hoping to get from the model might also improve performance (see "In-context learning (natural language processing)" on Wikipedia).
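Concretely, few-shot examples can just be extra user/assistant message pairs sent ahead of the real question in the same request. A rough sketch, again assuming the v1-style OpenAI Python client (the example Q&A pairs here are invented placeholders you'd replace with your own):

```python
from openai import OpenAI

client = OpenAI()

# Hand-written example Q&A pairs showing the style and level of detail
# you want; the model imitates these when answering the real question.
FEW_SHOT_EXAMPLES = [
    {"role": "user", "content": "Do you ship internationally?"},
    {"role": "assistant", "content": "Yes. We ship to most countries; delivery "
     "takes 7-14 business days and duties are paid by the customer."},
    {"role": "user", "content": "What's your warranty?"},
    {"role": "assistant", "content": "All products carry a 12-month warranty "
     "covering manufacturing defects. Keep your order number to file a claim."},
]

def answer(question: str) -> str:
    """Answer one customer question, primed with the few-shot examples."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=(
            [{"role": "system", "content": "You answer customer questions for our store."}]
            + FEW_SHOT_EXAMPLES
            + [{"role": "user", "content": question}]
        ),
    )
    return response.choices[0].message.content

print(answer("How long do returns take?"))
```

A handful of well-chosen pairs is usually enough; too many just eats into your context window.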