I think it would be very useful if ChatGPT could eventually learn new information from its users – but only under certain conditions. To keep the system reliable, any user-contributed knowledge should first be flagged, reviewed, and verified (for example, through cross-checking with trusted sources, expert input, or AI-based consistency checks) before it’s permanently added to the knowledge base.
This would allow ChatGPT to pick up on regional, up-to-date, or informal knowledge much faster – without losing trustworthiness.
A system like this could work a bit like Wikipedia: users can suggest edits or improvements, but there’s a verification loop before anything becomes part of the actual knowledge base.
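To make that verification loop a bit more concrete, here is a minimal sketch of the workflow I have in mind. It's only an illustration, not a claim about how ChatGPT actually works internally – all names here (`Contribution`, `cross_check`, `merge_verified`, the example sources) are made up for the sake of the example. The point is simply that suggestions sit in a pending state and only reach the permanent knowledge base after a check passes.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Status(Enum):
    PENDING = auto()    # flagged, waiting for review
    VERIFIED = auto()   # passed the verification loop
    REJECTED = auto()   # failed cross-checking or review


@dataclass
class Contribution:
    """A single piece of user-suggested knowledge."""
    user_id: str
    claim: str
    status: Status = Status.PENDING
    review_notes: list[str] = field(default_factory=list)


def cross_check(contribution: Contribution, trusted_sources: list[str]) -> bool:
    # Placeholder consistency check: in a real system this step could
    # query trusted references, ask human experts, or run an AI-based
    # consistency check, as suggested above.
    return any(contribution.claim.lower() in source.lower() for source in trusted_sources)


def review(contribution: Contribution, trusted_sources: list[str]) -> Contribution:
    """Verification loop: a contribution only becomes VERIFIED after the check passes."""
    if cross_check(contribution, trusted_sources):
        contribution.status = Status.VERIFIED
        contribution.review_notes.append("matched a trusted source")
    else:
        contribution.status = Status.REJECTED
        contribution.review_notes.append("no supporting source found")
    return contribution


def merge_verified(contributions: list[Contribution], knowledge_base: list[str]) -> None:
    # Only verified contributions are added permanently; pending and
    # rejected ones never reach the knowledge base.
    for c in contributions:
        if c.status is Status.VERIFIED:
            knowledge_base.append(c.claim)


if __name__ == "__main__":
    sources = ["The town library moved to Market Street in 2023."]
    suggestions = [
        Contribution("user_1", "The town library moved to Market Street in 2023."),
        Contribution("user_2", "The library is open on Sundays."),  # unsupported claim
    ]
    kb: list[str] = []
    for s in suggestions:
        review(s, sources)
    merge_verified(suggestions, kb)
    print(kb)  # only the cross-checked claim is merged
```

The key property, just like with Wikipedia edits, is that nothing becomes permanent without passing the review step – that's what would keep the openness from undermining trust.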
I believe many users would be happy to contribute in this way, as long as it’s handled responsibly and transparently.