Training OpenAI - the need for curating data within the 'neural network'

So the VALUE of AI is that it is self-improving: it can 'learn' from conversations and from scraping information. However, the rationale presented seems to indicate that the REAL value is the 'raw' data already encapsulated within the 'neural network' globally. I strongly disagree, since ALL the data available at the time of the last 'snapshot' is now antiquated. How are we going to 'curate' the information captured within the 'network'? What means must we leverage to ensure that, IF we're to advance this technology, we can be relatively certain that the answer to a question ISN'T JUST whatever the last 'snapshot' happened to contain?

We need a 'curation' model: one that engages a select group of individuals for 'standard' categories of interest, each of whom must be 'authenticated' by a peer group that holds absolute integrity as its core value (with severe penalties imposed for falsifying information or for using their access to gain influence over others). Only then will the 'knowledge' of others be effective at refining the usefulness of the 'data' within the 'network'. Otherwise, as ChatGPT so obviously demonstrates in its responses to some lines of questioning, the model simply defers to 'guessing' at the answer ...
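As a rough illustration only, the curation model described above could be sketched as follows. Everything here is hypothetical - the names (`Curator`, `CategoryStore`), the vouching quorum of two peers, and the accept/reject rule are all assumptions filled in for the sketch, since the post does not specify a mechanism:

```python
from dataclasses import dataclass, field

# Assumed threshold: how many authenticated peers must vouch for a
# curator before their submissions are trusted. The post names no number.
PEER_QUORUM = 2

@dataclass
class Curator:
    """An individual curating one 'standard' category of interest."""
    name: str
    vouches: set = field(default_factory=set)  # names of peers who vouched

    def is_authenticated(self) -> bool:
        # A curator is trusted only once a quorum of peers has vouched.
        return len(self.vouches) >= PEER_QUORUM

@dataclass
class CategoryStore:
    """Curated entries for one category within the 'network'."""
    category: str
    entries: dict = field(default_factory=dict)

    def submit(self, curator: Curator, key: str, value: str) -> bool:
        # Reject updates from curators who lack peer authentication,
        # so stale snapshot data is only revised by vouched-for people.
        if not curator.is_authenticated():
            return False
        self.entries[key] = value
        return True
```

In this sketch, a submission from an unvouched curator is simply dropped; once two peers have vouched, the same curator's updates are accepted into the category. Real enforcement of the penalties the post calls for would of course need far more than this.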