I’m envisioning a sort of Wiki through which specialists could help GPTs reduce hallucinations. For example, I asked it today whether IntelliJ supports finding only WRITE access to variables. Charlie said it didn’t. But it does. As an experiment I even offered an image showing the icons.
Wouldn’t it be rather cool if Charlie could answer such questions more accurately whenever there is a >90% match for the question in a database of up-to-date facts, maintained by us?
I’m thinking people with a high degree of trust, or topics with a low degree of sensitivity (an average of the two could be computed), could qualify for pushing such updates directly via the chat interface.
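To make the idea concrete, here is a minimal sketch of such a gate. The scoring rule (averaging the author’s trust with the topic’s inverse sensitivity) comes from the post; the function name, the 0–1 scales, and the 0.75 threshold are my own assumptions for illustration.

```python
# Hypothetical gate: allow a direct update when the average of the
# author's trust score and the topic's "safety" (1 - sensitivity)
# clears a threshold. All values are assumed to be on a 0..1 scale.
def can_push_directly(trust: float, sensitivity: float,
                      threshold: float = 0.75) -> bool:
    return (trust + (1.0 - sensitivity)) / 2.0 >= threshold

# A highly trusted user editing a low-sensitivity topic:
print(can_push_directly(trust=0.9, sensitivity=0.2))  # True
# A new user editing a sensitive topic:
print(can_push_directly(trust=0.4, sensitivity=0.8))  # False
```

Updates that fail the gate could instead go into a moderation queue, the way edits to protected Wikipedia pages do.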
Adding a vector database with a retrieval-augmented generation (RAG) step, in combination with a community-maintained fact database, could improve the results greatly.
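A toy sketch of that RAG step, tying in the >90% match idea from above: look up the community fact whose embedding is most similar to the query and inject it into the model’s context only when the similarity clears the threshold. The hand-made 3-d embeddings and the fact store are purely illustrative; a real system would use a sentence-embedding model and an actual vector database.

```python
import math

# Toy community-maintained fact store: (embedding, fact text).
# The vectors are hand-made stand-ins for real sentence embeddings.
FACTS = [
    ([0.9, 0.1, 0.0], "IntelliJ's Find Usages can be filtered to show "
                      "only write access to a variable."),
    ([0.0, 1.0, 0.2], "Some other community-verified fact."),
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (math.sqrt(sum(x * x for x in a))
            * math.sqrt(sum(x * x for x in b)))
    return dot / norm if norm else 0.0

def retrieve(query_embedding, threshold=0.90):
    """Return the best-matching fact, or None if nothing clears the bar."""
    best = max(FACTS, key=lambda f: cosine(query_embedding, f[0]))
    score = cosine(query_embedding, best[0])
    return best[1] if score >= threshold else None

# A query embedding close to the IntelliJ fact is retrieved:
print(retrieve([0.85, 0.15, 0.05]))
# A dissimilar query falls back to the model's own knowledge:
print(retrieve([0.0, 0.1, 1.0]))  # None
```

The important property is the fall-through: when no fact clears the threshold, nothing is injected, so the community database can only add accuracy, not force irrelevant answers.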
Perhaps even go one step further and make it open, so that any AI from any company can use it.