Is it possible to add a confidence level to ChatGPT responses when it gives an answer? Some of the strongest criticism I have heard is that GPT sometimes gives very confident wrong answers. A confidence level would inform users when to be cautious about the information it gives.
Bumping to revive this topic. I am very curious to see whether this is possible or not.
Also here to revive this: it would be very helpful for me too.
Well, now that you can get logprobs from the OpenAI response, it seems reasonable to compute an average log likelihood per token, which might serve as a proxy for answer confidence. It probably wouldn't be robust, but it may be better than nothing.
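A minimal sketch of that averaging idea. The commented-out API call reflects the current `openai` Python client (`logprobs=True` on chat completions); the `average_confidence` helper and the example logprob values are illustrative, not part of any official API:

```python
import math

def average_confidence(token_logprobs):
    """Average the per-token log likelihoods and map the result back to a
    0-1 probability scale via exp(). Higher means the model was, on average,
    more certain about each token it emitted."""
    if not token_logprobs:
        raise ValueError("no token logprobs given")
    avg = sum(token_logprobs) / len(token_logprobs)
    return avg, math.exp(avg)

# With the openai client, per-token logprobs can be requested like this:
# resp = client.chat.completions.create(
#     model="gpt-4o-mini", messages=messages, logprobs=True)
# token_logprobs = [t.logprob for t in resp.choices[0].logprobs.content]

# Example with made-up logprob values:
avg, conf = average_confidence([-0.05, -0.2, -1.3, -0.01])
```

Note the caveat from the post above still applies: one low-probability token (like the `-1.3` here) gets averaged away, so this is a rough signal, not a calibrated confidence.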
In addition, simply making a follow-up GPT call asking "how confident are you in this answer, on a 1-10 scale?" would probably be the simplest approach.
Totally on board with what @dan.raviv mentioned. I'd like to chip in with another angle and bring up Retrieval-Augmented Generation (RAG), a bit of a step up in complexity but worth it if you're working with the API.
In essence, "RAG takes an input and retrieves a set of relevant/supporting documents given a source (e.g., Wikipedia)." The model can then use this enriched context to craft its response.
Your data source could be either an extensive external knowledge base (the Internet, another API, …) or an internal vector database. As part of the RAG process, you could implement a confidence measurement mechanism based on the information retrieved from the knowledge base.
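One way to sketch that confidence mechanism: treat the retrieval similarity score itself as the confidence signal. In this toy version the document "embeddings" are hand-made vectors and the `0.75` threshold is an arbitrary assumption; in a real setup the vectors would come from an embedding model and the threshold would be tuned on your data:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve_with_confidence(query_vec, docs, threshold=0.75):
    """docs is a list of (text, embedding) pairs. Returns the best-matching
    text, its similarity score, and a flag saying whether the score clears
    the confidence threshold."""
    scored = [(text, cosine(query_vec, vec)) for text, vec in docs]
    best_text, best_score = max(scored, key=lambda pair: pair[1])
    return best_text, best_score, best_score >= threshold

# Toy corpus with hand-made 2-d "embeddings":
docs = [("refund policy", [1.0, 0.0]), ("shipping times", [0.0, 1.0])]
text, score, confident = retrieve_with_confidence([1.0, 0.1], docs)
```

If `confident` comes back False, the application can warn the user (or decline to answer) instead of letting the model generate from weak context, which is exactly the cautionary signal the original question asked for.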
I'm very interested in this too. I've been working to include a confidence meter in a meeting summarizer built with ChatGPT's API but haven't cracked it yet.