What the AI summarizer had to say about the meaning behind your post:
sfs introduces a new idea for creating a system that produces a "sterile data set" aimed at refining machine learning by eliminating small inconsistencies. The concept involves using small, relatively simple mathematical and logical rules to generate outputs that are indistinguishable from real physical properties. The post explains that by manipulating null values (in every sense of the word) across various relations, one might simulate actual physical properties and behaviors. The idea is ambitious, exploring areas such as the sequence of events from a macroscopic perspective, the arrow of time, and the possibility of simultaneously replicating both repulsive and attractive forces within a simplified framework.
While some logical parts of the proposal appear promising and potentially useful, the approach might demand considerable compute time, and its overall accuracy or potential limits remain open questions. sfs reflects on past skepticism toward such approaches but notes that the increased availability of data and the advanced capabilities of modern AI now provide a more favorable backdrop for this type of idea. The author also acknowledges that English is not their native language and apologizes for any language-related issues.
It sounds like you are discussing world modeling, and learning and predicting from physical rules, rather than how to remove ambiguities in language.
For example, an AI could learn to catch a ball through its failures rather than being explicitly programmed to calculate trajectories, or learn to drive around a child obstructing the road. Such systems also operate much better in sterile environments than in chaotic ones full of unlearned phenomena.
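To make the trial-and-error idea concrete, here is a minimal sketch (not from the original post) of tabular Q-learning on a toy "catch the ball" task: the agent is never told trajectory math, it just gets a reward when the paddle ends up under the falling ball. The grid size, reward scheme, and hyperparameters are all assumptions chosen for illustration.

```python
# Minimal sketch: a tabular Q-learning agent learns to "catch a ball"
# in a toy grid world purely from reward feedback on its failures.
# Environment, state encoding, and hyperparameters are illustrative
# assumptions, not anything from the original post.
import random

WIDTH, HEIGHT = 5, 5                   # grid size
ACTIONS = (-1, 0, 1)                   # move paddle left, stay, right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = {}  # Q[(ball_x, ball_y, paddle_x)] -> list of action-value estimates

def q_values(state):
    return Q.setdefault(state, [0.0] * len(ACTIONS))

def episode():
    ball_x = random.randrange(WIDTH)   # ball drops from a random column
    paddle_x = WIDTH // 2
    for ball_y in range(HEIGHT):
        state = (ball_x, ball_y, paddle_x)
        # Epsilon-greedy: mostly exploit the current estimates, sometimes explore.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = q_values(state).index(max(q_values(state)))
        paddle_x = min(WIDTH - 1, max(0, paddle_x + ACTIONS[a]))
        done = ball_y == HEIGHT - 1
        reward = (1.0 if paddle_x == ball_x else -1.0) if done else 0.0
        next_state = (ball_x, ball_y + 1, paddle_x)
        target = reward if done else reward + GAMMA * max(q_values(next_state))
        # Q-learning update: nudge the estimate toward the observed return.
        q_values(state)[a] += ALPHA * (target - q_values(state)[a])
        if done:
            return reward > 0

# The agent misses often at first and improves purely from those failures.
for phase in range(5):
    catches = sum(episode() for _ in range(2000))
    print(f"phase {phase}: caught {catches}/2000")
```

The catch rate climbs across phases without the program ever containing a physics model, which is the point of the "learn from failures" framing.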
You can take a look at the deep reinforcement learning work from the previous decade, before OpenAI and the rest of the world turned their attention to generative AI and transformers.
https://spinningup.openai.com/en/latest/spinningup/rl_intro.html