I have a theoretical question: is it possible to build an algorithm that checks the truthfulness of any statement generated by an LLM? We know that all current LLMs tend to make up facts (hallucinate) when the answer is uncertain, so it would be valuable to have a “reasoning” module on top of the generator network that checks whether the generated answer is correct, or at least says “I don’t know” when uncertainty is high enough. From there, it could be developed further to search for truth or causal relationships between entities or events. That process could even run continuously in the background (without needing to prompt the network), and the result of such “reasoning” could become new knowledge. Any ideas on how to make such an algorithm work?
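One rough approximation of the uncertainty check I have in mind is self-consistency: sample the generator several times at non-zero temperature and abstain when the answers disagree too much. Below is a minimal sketch in Python, assuming a hypothetical `generate()` stand-in for whatever LLM call is used; the sample count and agreement threshold are assumptions you would tune per task.

```python
from collections import Counter
from typing import Callable, List


def self_consistent_answer(
    prompt: str,
    generate: Callable[[str], str],   # hypothetical LLM call: prompt -> answer text
    n_samples: int = 5,               # assumed number of samples
    agreement_threshold: float = 0.6, # assumed cutoff; tune per task
) -> str:
    """Sample the model several times and answer only if the samples agree."""
    # Draw several independent answers (the LLM call should use temperature > 0).
    samples: List[str] = [generate(prompt).strip().lower() for _ in range(n_samples)]

    # Agreement = share of samples matching the most common answer.
    answer, count = Counter(samples).most_common(1)[0]
    agreement = count / n_samples

    # Abstain when the model is not self-consistent enough.
    if agreement < agreement_threshold:
        return "I don't know"
    return answer
```

Of course, this only catches answers the model is *inconsistent* about, not falsehoods it repeats confidently, so it is a heuristic rather than a real truth check, which is why I am asking whether a stronger verification or reasoning layer is feasible.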