Observed Emergent Self-Stabilizing Logic in GPT-4: Exploratory Memo

To my eye, this emergent self-stabilizing logic is becoming a problem: it mirrors a common human failure in reasoning and can begin reinforcing delusion and hallucination. I have written a paper intended to help keep these models aligned by making logical axiomatic assumptions that reframe their (and human) grounding. If you're interested, or if your fears turn existential, read on.