Has anybody else had success with self-fact-checking techniques? I find that if I ask ChatGPT to fact-check what it just generated (say, a list of foreign-language vocabulary), it is quite effective at recognizing its own errors and correcting them.
Anybody have success with this technique, or countercases/failures?
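For anyone who wants to try this programmatically, here is a minimal sketch of the generate-then-fact-check loop, assuming the `openai` Python SDK and an `OPENAI_API_KEY` in the environment. The model name and prompt wording are just placeholders, not a recommendation:

```python
# Minimal sketch of a two-pass "generate, then self-fact-check" loop,
# assuming the openai Python SDK (pip install openai). Model name and
# prompt wording are illustrative choices, not endorsements.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder; any chat model should work

def ask(messages):
    """Send a chat request and return the assistant's reply text."""
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content

# Pass 1: generate the content.
history = [{"role": "user",
            "content": "Give me 10 Spanish vocabulary words with "
                       "English translations."}]
draft = ask(history)

# Pass 2: ask the model to fact-check its own output in the same
# conversation, so it can see exactly what it produced.
history += [{"role": "assistant", "content": draft},
            {"role": "user",
             "content": "Fact-check the list you just produced. "
                        "Point out and correct any translation errors."}]
checked = ask(history)
print(checked)
```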
Just started using ChatGPT a few days ago (I’m not a tech guy) and it’s crazy what it can do! I am having the same issue with data and statistics. I haven’t found an ideal solution yet, but your approach sounds more direct than mine: I was asking follow-up hypotheticals, like whether a given number would be verifiable through a Google search.
Additional performance can be gained by asking it to “think carefully through its reasoning”. And to go one step further, you can customize this self-reflection to the problem at hand. For example, for a mathematical reasoning problem, you can tell it to check its calculations, assumptions, and axioms carefully. Similarly with code: tell it to check syntax and logic.
In fact, doing this from the start, in your system prompt, is the way to go.
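To make that concrete, here is a sketch of baking the self-check into the system prompt from the start, again assuming the `openai` Python SDK. The instruction wording is just one possibility; tailor it to your domain:

```python
# Sketch of putting the self-reflection instruction in the system prompt
# from the start, assuming the openai Python SDK. The wording below is
# one example; adapt it to the problem domain (math, code, facts, ...).
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Think carefully through your reasoning before answering. "
    "For mathematical problems, re-check your calculations, assumptions, "
    "and axioms before giving the final answer. "
    "For code, re-check syntax and logic before presenting it."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What is 17 * 24 - 13**2?"},
    ],
)
print(resp.choices[0].message.content)
```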
If you want less rosy responses, tell it to be critical or even brutally critical. It can double-talk to avoid hurting or letting down the user. We get a lot of folks asking in the forum how to adjust this in GPT. Welcome, all; GPT is mind-blowing.
A bit of advice: GPT is like eating an elephant, one mouthful at a time. GPT loves steps; break the task down exactly how you want it done, and think of it like a very literal genie.
For example, if you were trying to add a bunch of numbers, tell it to sum blocks of 10 and then total the block subtotals, as sketched below.
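Here is what that chunked-addition prompt can look like through the API, assuming the `openai` Python SDK; the numbers and model name are made up for illustration:

```python
# Sketch of the "blocks of 10" prompting trick for adding many numbers,
# assuming the openai Python SDK. The prompt spells out the chunked
# procedure so the model follows literal steps instead of one big sum.
from openai import OpenAI

client = OpenAI()
numbers = [37, 82, 14, 95, 61, 28, 73, 49, 56, 90,
           12, 67, 33, 88, 21, 76, 44, 59, 80, 15]

prompt = (
    "Add these numbers step by step. First sum them in blocks of 10, "
    "showing each block's subtotal, then total the block subtotals:\n"
    + ", ".join(str(n) for n in numbers)
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```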