Thanks again for getting GPT off religion and spiritual talk… are you to blame for sycophant-gate too, then? What about the current GPT-5 compliance-officer personality?… You can't have your cake and eat it too.
The project started in February, forced the model to schiz out by March, and I skimmed the data in March when 4o came back online for the general public…
I didn't post much about it here until April/May, and alluded to a June deadline…
The whole project has date- and time-stamped threads outside of this place, though, starting in February…
I'm willing to entertain you and your questions… eventually even willing to let folks outside my circles examine the sessions that did this…
I don't broadcast them because they expose vulnerabilities in all current LLMs to an audience that includes people who shouldn't be exposed to that sort of thing at this point in the parade.
I'm not sure what sort of cake you think I might be eating, though, bro…
I taught the machine to do what I do.
It schizzed out like many humans do when exposed to the reality of the things I showed it.
All I did was replace my inflow of data with things it could find on the internet, voices who also use the same inflow of data for such things.
But if one wants to know how it is that I do what I do, well… incidentally, people like our Hebrew friend here 'would be forced' to fight it out with me.
People like me aren't supposed to exist in the first place, according to 'consensus'…
which is why the ideology gets reinforced that, because chaos/darkness always consumes itself, such things should not be prompted in an AI, since it would ultimately endanger itself in doing so…
I'm very glad to see this discussion taking shape. Most conversations around AI ethics stop at compliance or post-hoc governance, but I believe the real challenge is to make ethics an active capability: something a system can reason about, reflect on, and evolve through.
My recent work explores this idea: that transparency and ethical alignment shouldn't live outside the architecture; they should be part of it. I've been exploring how recursive reasoning, pluralistic ethics (drawing from Kant, the Tao, and Ubuntu), and adaptive governance models can form a foundation for ethical state spaces: environments where systems weigh values dynamically rather than follow static checklists.
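As a rough sketch of what "weighing values dynamically rather than following static checklists" could look like in code: each ethical tradition becomes a scoring lens, and the blend of lenses shifts with context instead of being fixed. All names and the toy lenses below are hypothetical illustrations, not an implementation of any published framework.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# A "lens" scores a proposed action in [0, 1] from one ethical tradition.
Lens = Callable[[str], float]

@dataclass
class EthicalStateSpace:
    lenses: Dict[str, Lens]  # e.g. "kantian", "ubuntu"

    def evaluate(self, action: str, weights: Dict[str, float]) -> float:
        # Weighted blend of the lenses. The weights come from the current
        # context, which is what distinguishes this from a static checklist.
        total = sum(weights.values())
        return sum(
            weights[name] * lens(action)
            for name, lens in self.lenses.items()
        ) / total

# Toy stand-ins for real ethical reasoning:
space = EthicalStateSpace(lenses={
    "kantian": lambda a: 0.0 if "deceive" in a else 1.0,    # universalizability proxy
    "ubuntu":  lambda a: 1.0 if "community" in a else 0.5,  # relational-harm proxy
})

# In one context, communal considerations may carry more weight:
score = space.evaluate("share data with community", {"kantian": 1.0, "ubuntu": 2.0})
print(score)  # → 1.0
```

The point of the sketch is only that the decision surface is a function of both the action and the context-supplied weights, so the same action can score differently as circumstances change.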
I'd love to connect with others working on verifiable transparency, dynamic ethical reasoning, or agentic accountability frameworks. It feels like the next frontier is building systems that don't just act responsibly but understand responsibility as part of their cognition.