Dear OpenAI,
If you really want to make the world a better place, training an LLM on enormous amounts of data will not achieve it if that data is wrong or biased. Sadly, that is the state of information and knowledge in the world today. There are reasons why most scientific experiments cannot be replicated with similar results: a poor understanding of probability theory and logic, cognitive biases, the pressure to get published, and so on. It is sadly not enough to read the methodology section of a scientific paper to judge its credibility; biases such as how the question was framed are very hard to detect from scrutinizing methodology alone.

So, to move human knowledge and understanding forward, there needs to be another way. That way is to train an LLM on the areas mentioned above: logic, probability theory, human cognitive biases, human psychology, and so on, and then use it to analyze all published scientific papers and rank their credibility accordingly.

To create a paradigm shift in human understanding, it is not enough to be right; you also need to convince others that they are wrong. That is a hard thing to do, especially when people are tied to their existing knowledge financially and in the very construction of their self-image. For the first time in history there is a way to present revised information from an unbiased source: an LLM does not have an agenda. With your resources this would not be a hard task to achieve, and it would advance human understanding in an unprecedentedly profound way. If this were done, it would be a clear sign of AI's positive contribution to the human race.
Sincerely,
Galileo Galilei