How OpenAI can become a more fact- and credit-based information resource

Hi there!
This is a stray idea, but likely along the lines of what OpenAI is already considering. To demonstrate fair content usage, could OpenAI more openly disclose the sources used in generating answers and training its models, and then create a framework for content producers (especially those that traditionally rely on ad revenue, such as news outlets) to claim or limit access to their content unless a royalty contract is in place? In this way, OpenAI could compensate content creators much as streaming services like Spotify do, at a rate agreeable to both parties. Moreover, to become a more academically useful tool, OpenAI could form agreements with academic databases for access beyond their paywalls, funded by subscribers paying an additional premium. All of this depends on ChatGPT becoming more accurate in disclosing where its information comes from.

Does this seem like a feasible business model? As it stands, the lack of academic resources and of clarity about where information comes from leaves ChatGPT a useful tool, but one relegated in many of the same ways Wikipedia has been. Yet the potential scientific applications of this program, and its sustainable role in the information ecosystem, are vast.

Hi @h2y7yysdqd and welcome to the community!

Part of what you mentioned is covered by SearchGPT. Its results provide valid references/citations, which may make it better from a credibility standpoint. OpenAI has established, and continues to establish, relationships with publishers such as News Corp and Axel Springer. They've also formed partnerships with Arizona State University and Columbia University.

So, in a nutshell, part of this is already ongoing.


You can have access to whatever papers you want, but until it can read graphs, it's unsuitable for science.