Are there any ideas to reduce hallucinations via community-moderated databases?

I’m envisioning a sort of wiki through which specialists could help GPTs reduce hallucinations. For example, I asked it today whether IntelliJ supports finding only WRITE accesses to a variable. Charlie said it doesn’t. But it does. As an experiment I even offered an image showing the icons.

Wouldn’t it be rather cool if Charlie could answer such questions more accurately once there is a >90% match for the question in a database of up-to-date facts, maintained by us?

I’m thinking that people with a high degree of trust, or edits on topics with a low degree of sensitivity (an average of the two could be computed), could push such updates directly via the chat interface.
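The gating rule above could be sketched roughly like this. Everything here is an assumption for illustration: the function name, the 0–1 scales, and the threshold are all made up, not part of any real system.

```python
# Hypothetical sketch: allow a direct push when the average of contributor
# trust and topic *insensitivity* (both on a 0..1 scale) clears a threshold.
# The threshold value 0.75 is an arbitrary illustration.

def can_push_directly(contributor_trust: float, topic_sensitivity: float,
                      threshold: float = 0.75) -> bool:
    """Average the contributor's trust with how insensitive the topic is;
    permit an unreviewed update only when that average is high enough."""
    score = (contributor_trust + (1.0 - topic_sensitivity)) / 2.0
    return score >= threshold

# A trusted contributor editing a low-sensitivity topic passes;
# a low-trust contributor on a sensitive topic would not.
print(can_push_directly(0.9, 0.1))
```

The point of averaging the two signals is that either high trust or low sensitivity alone can unlock a direct push, while risky combinations fall back to review.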

A vector database with a RAG step could be added; in combination with a community-maintained database, it could improve the results greatly.
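The retrieval step could look something like the following minimal sketch. The tiny hand-made vectors stand in for real embeddings, and the example fact text is invented; a real system would use an embedding model plus an actual vector database, with the community wiki as the source of the stored facts.

```python
# Minimal RAG-lookup sketch: return a community-maintained fact only when
# the query's similarity to a stored question exceeds the >90% threshold
# mentioned above. Vectors and facts are toy placeholders.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# (embedding of the question, community-verified answer) pairs.
facts = [
    ([0.9, 0.1, 0.0], "IntelliJ can filter usages to write access only."),
    ([0.0, 0.2, 0.9], "Some other up-to-date fact."),
]

def retrieve(query_embedding, threshold=0.90):
    """Return the best-matching fact if it clears the threshold,
    else None, so the model answers without augmentation."""
    best_vec, best_answer = max(
        facts, key=lambda f: cosine(query_embedding, f[0]))
    if cosine(query_embedding, best_vec) >= threshold:
        return best_answer
    return None

print(retrieve([0.88, 0.15, 0.05]))
```

The threshold is what keeps the database from injecting loosely related facts: below it, the retrieval simply abstains rather than risk steering the answer with the wrong entry.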

Perhaps we could even go one step further and make it open, so that any AI from any company can use it.

Hi @Kyliathy ,

This is more or less what we are working on with Breebs: Community Knowledge Capsules.
There’s a short explanatory video here, and also a FAQ.
Each Breeb is a knowledge capsule created from PDF documents, on a given theme.

You can interact with any Breeb, and everyone can freely:

  • Create a Breeb from the Breebs website
  • Use an existing Breeb
  • Review/report a Breeb
  • Create a new dedicated GPT based on the Knowledge of any Breeb (tutorial)

Feel free to take a look and share your suggestions with us.