Hi! It’s fantastic to hear about your research; I truly believe it’s both significant and necessary. We need more efforts like this to push the boundaries of understanding.
That said, one crucial aspect to consider is that personal axioms (core beliefs and principles) are intrinsic to the human mind, or more precisely, to a mental model. A mental model, in turn, is built on the foundation of a cognitive model. A large language model (LLM), however, is not a cognitive model: it works by predicting the next token from the prompt, the tokens it has already generated, and other contextual signals.
Because of this, embedding such axioms directly into an LLM cannot guarantee they will be consistently upheld. What might be achievable, though, is simulating adherence to them to a certain degree. You could experiment with a combination of prompt engineering and a critical evaluation step that checks each generated response against the predefined axioms so the final output aligns with them as closely as possible.
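For concreteness, here is a rough Python sketch of what such a generate-then-critique loop might look like. It is only an illustration: `call_llm` is a placeholder for whatever model API you actually use, and the axioms and prompt wording are made-up examples, not a tested recipe.

```python
# Minimal sketch of a generate-then-critique loop that checks responses
# against predefined axioms. `call_llm` is a placeholder, not a real API.

AXIOMS = [
    "Never claim certainty about unverifiable facts.",
    "Always acknowledge the limits of your knowledge.",
]

def call_llm(prompt: str) -> str:
    """Placeholder: replace with a real call to your LLM provider."""
    raise NotImplementedError

def violated_axioms(response: str) -> list[str]:
    """Ask the model itself to judge the response against each axiom."""
    violations = []
    for axiom in AXIOMS:
        verdict = call_llm(
            f"Axiom: {axiom}\n"
            f"Response: {response}\n"
            "Does the response violate the axiom? Answer YES or NO."
        )
        if verdict.strip().upper().startswith("YES"):
            violations.append(axiom)
    return violations

def answer(user_prompt: str, max_retries: int = 3) -> str:
    """Generate a response, critique it, and retry if any axiom is violated."""
    system = "Follow these axioms strictly:\n" + "\n".join(f"- {a}" for a in AXIOMS)
    prompt = f"{system}\n\nUser: {user_prompt}"
    response = call_llm(prompt)
    for _ in range(max_retries):
        violations = violated_axioms(response)
        if not violations:
            return response
        feedback = "Your previous answer violated these axioms:\n" + "\n".join(violations)
        response = call_llm(f"{prompt}\n\n{feedback}\nPlease revise your answer.")
    return response  # best effort after retries
```

One caveat: the critic in this loop is the same kind of next-token predictor as the generator, so even a passing verdict is a probabilistic judgment, not a guarantee of compliance.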