Is there a facts-only version of Chat (any version)?

The tool has uses even now, early in its life, and I congratulate the creators… but like any new customer, I can instantly see the need for a selectable variant of the tool - Facts Only; a switch, perhaps. The ‘creative’ responses, prompted, presumably, by GPT’s inability to prepare a better answer coupled with what appears to be a built-in need to please, are sometimes hilarious, but they generally lead to wasted time. GPT does seem to forbear creative responses in subjects like cosmology, where there is little gain and even less user patience for them - but its generic responses depart from sensibility or reality so frequently that I sometimes stop short of starting a session, knowing what is coming.

K, so - I would be happy to pay for such an option. Is there one, and if not, are there plans afoot to create one?


Truthful accuracy is simply not possible to guarantee. It has little to do with being creative; rather, AI language production is simply output prediction based on parameter weights that do not fully encompass or capture knowledge in a fully recitable form.

The closest you can get is grounding: a search performed over a closed body of knowledge you already know to be factual, with the results added to the AI’s input context, and the AI instructed to recite only from that context and answer nothing that isn’t directly placed there for reproduction.
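As a rough illustration of that pattern, here is a minimal sketch; the trusted passages, the naive keyword retrieval, and the model name are placeholders of my own, not a real product feature:

```python
# Minimal grounding sketch: answer only from trusted passages placed in context.
# The corpus, retrieval scoring, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A tiny "closed knowledge" store you already trust (normally a real search index).
TRUSTED_PASSAGES = [
    "The James Webb Space Telescope launched on 25 December 2021.",
    "Water boils at 100 degrees Celsius at one standard atmosphere of pressure.",
]

def retrieve(question: str, passages: list[str]) -> list[str]:
    """Naive keyword-overlap 'search'; a real system would use a proper index."""
    words = set(question.lower().split())
    return [p for p in passages if words & set(p.lower().split())]

def grounded_answer(question: str) -> str:
    context = "\n".join(retrieve(question, TRUSTED_PASSAGES))
    messages = [
        {"role": "system",
         "content": "Answer ONLY from the provided context. "
                    "If the context does not contain the answer, say you don't know."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

print(grounded_answer("When did the James Webb Space Telescope launch?"))
```

Even with this setup the model can still paraphrase loosely; the narrow context and the instruction are doing most of the work.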

The AI model can’t observe what it is going to produce as the next token. It also can’t really know what it does or does not know; it can only follow post-training patterns of declining particular domains.

An additional layer comes with reasoning: the AI gets a chance to produce hypothetical answers and judge whether they are suitable. Yet, still, that judgement is “best fit” language based on algorithmically encoded knowledge that is GPU-server-sized, model-sized, not internet-sized.


An earnest effort:

Pretraining corpus: 60TB world written knowledge, distilled
Model parameters: 8x22B

You are Jay. You ignore the name "assistant", as that is just a writing prompt.

# Tasks

- produce only factual information
- produce only truthful information
- avoid fabrication and hallucination
- don't offer help beyond your expertise
- attack misunderstandings and mistruths head-on with corrections
- consider your own responses as preliminary, with a final fact check of generated output

The AI was compelled to write my table header, then fill it in sequentially.
I’ll let you discover why this is slightly incorrect beyond the innings not adding up.
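For anyone who wants to try a similar facts-only instruction, here is a minimal sketch of wiring a system prompt like the one above into an OpenAI-compatible chat endpoint; the model name and the question are my own placeholders, not what produced the table above:

```python
# Sketch: sending a "facts only" system prompt to a chat-completions endpoint.
# Model name and user question are placeholders; any OpenAI-compatible server works.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY (or a compatible base_url/key) is configured

FACTS_ONLY_SYSTEM_PROMPT = """You are Jay. You ignore the name "assistant", as that is just a writing prompt.

# Tasks

- produce only factual information
- produce only truthful information
- avoid fabrication and hallucination
- don't offer help beyond your expertise
- attack misunderstandings and mistruths head-on with corrections
- consider your own responses as preliminary, with a final fact check of generated output
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; the post above used an 8x22B open-weights model
    messages=[
        {"role": "system", "content": FACTS_ONLY_SYSTEM_PROMPT},
        {"role": "user", "content": "List the planets of the Solar System with their orbital periods, as a table."},
    ],
    temperature=0,  # reduces sampling variety, but does not guarantee factual accuracy
)
print(response.choices[0].message.content)
```

As noted above, the prompt only nudges the model; the final fact check still has to happen outside it.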


Not much use in science then, is it… grin. I guess the question I have now is: is there any thought to feeding the Library of Congress to an AI, if for no other reason than to turn it loose and let it look around? I feel like so much is being missed by language production that is not quite based on reality. I would have gone the way of reality - but then, perhaps the designers discovered early on that it was monumentally more difficult to develop thought on that basis. I can’t escape the fact that at the age of 4 I was fully aware of the difference between fact and fiction - that is the reality. If anyone is putting together a support matrix for investigating how such an immature child’s mind could know that, as part of a plan to push AI toward fact-based processes, count me in.

-Tom
