In search of Truth

I have a theoretical question. Is it possible to make an algorithm that checks the truthfulness of any statement generated by an LLM? We know that all current LLMs have an issue with making up facts (hallucinating) when the answer is uncertain. Thus, it would be important to have a "reasoning" module on top of the generator network that checks whether the generated answer is correct, or at least says "I don't know" when uncertainty is high enough. From there we could potentially develop it further, so that it searches for truth or causal links between certain entities or events. That process could even run continuously in the background (without any need to prompt the network), and the result of such "reasoning" could be new knowledge. Any ideas on how we could make such an algorithm work?
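To make the "say I don't know" part more concrete, one naive approach I can imagine is self-consistency: sample the model several times on the same question and abstain when the answers disagree too much. Here is a minimal sketch in Python, assuming a hypothetical `sample_answer` function that wraps whatever LLM call you actually have (this is just an illustration, not a real verifier):

```python
from collections import Counter
from typing import Callable

def answer_or_abstain(
    question: str,
    sample_answer: Callable[[str], str],  # hypothetical: returns one sampled LLM answer
    n_samples: int = 10,
    agreement_threshold: float = 0.7,
) -> str:
    """Return the majority answer, or "I don't know" if agreement is too low."""
    # Draw several independent samples and normalize them crudely for comparison.
    samples = [sample_answer(question).strip().lower() for _ in range(n_samples)]
    answer, count = Counter(samples).most_common(1)[0]
    # Abstain when the most common answer does not reach the agreement threshold.
    if count / n_samples < agreement_threshold:
        return "I don't know"
    return answer
```

Of course, this only measures the model's agreement with itself, not actual truth, which is exactly why I am asking whether something stronger is possible.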