Open letter about AI censorship (no, not about that...)

I just wanted to express that when certain information is interconnected with many other topics, I can’t help but think the outcome wouldn’t be any different from a human brain.

For instance, we all learned about nuclear fission back in our pre-K days (LOL, yes, I’m joking) or in middle school. We’re aware of the scientific result. However, perhaps there was another idea implanted in a specific child’s mind (whom I won’t mention) — the idea of creating a rocket with it and journeying to Mars?

That was a question my science teacher detested.

But my only thought was, “Hey, why not attach this to a rocket and shoot it towards Mars?”

It wasn’t until I turned 19 that I watched a documentary on the Orion project. Oddly enough, my only thought was “Wow.”

However, that experience contributed to my fascination with physics, which, in turn, led me to delve into computers and then psychology. And that brings me to the point of my story…

If you were to erase a person’s memories, they would forget (who) they are. Similarly, if you were to delete an AI’s training, it would forget (how to be) what it is.

That’s why they don’t…

Metaphor lost. I don’t have an alternative one; I’m not there yet.
And too many people keep misunderstanding me, so I’ll say it plainly: no, I am not making an ethical/moral statement about “erasing the poor AI’s mind.”
I’m trying to explain that experience in humans and training in AI both have branches that reach into other concepts.

And if you try to prevent one true thing, you will prevent many true things from generating.

I hope that clears it up.
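The branching claim above can be sketched as a toy dependency graph. The concept names and links here are invented purely for illustration; the point is only that censoring one node also cuts off everything that builds on it:

```python
# Hypothetical toy model: knowledge as a dependency graph, where each
# concept lists the concepts it builds on. Names are illustrative only.
deps = {
    "nuclear fission": [],
    "reactor design": ["nuclear fission"],
    "Orion-style propulsion": ["nuclear fission", "reactor design"],
    "radiation biology": ["nuclear fission"],
    "cancer radiotherapy": ["radiation biology"],
}

def blocked_by(banned, deps):
    """Return every concept that becomes unreachable once `banned` is censored."""
    blocked = {banned}
    changed = True
    while changed:  # keep propagating until no new concept gets cut off
        changed = False
        for concept, prereqs in deps.items():
            if concept not in blocked and any(p in blocked for p in prereqs):
                blocked.add(concept)
                changed = True
    return blocked

# Banning the one root concept takes every downstream concept with it.
print(sorted(blocked_by("nuclear fission", deps)))
```

Preventing one “true thing” at the root prunes the whole subtree, which is the interconnectedness I mean.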

In the dimension (realm, world, or whatever you call it) where we live, this AI thing is purely a simulation (tokens, embeddings, weights, probabilities). Regarding your text, I thought you meant that erasing an AI’s “memory” is immoral; I think for current language models this is just fine. But my opinion may change when a more human-like AI system emerges.
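That “tokens, embeddings, weights, probabilities” parenthetical can be made concrete with a minimal sketch. Everything below uses random toy values, not a trained model, and the averaging step is a crude stand-in for a real transformer:

```python
import math
import random

random.seed(0)

# Toy vocabulary and dimensions, chosen arbitrarily for illustration.
vocab = ["the", "cat", "sat", "mat"]
dim = 4

# Embeddings: one vector per token (random, untrained).
emb = {w: [random.gauss(0, 1) for _ in range(dim)] for w in vocab}
# Weights: a linear layer projecting a context vector to one score per token.
W = [[random.gauss(0, 1) for _ in range(dim)] for _ in vocab]

def next_token_probs(context):
    # Average the context embeddings (a stand-in for real attention layers).
    h = [sum(emb[w][i] for w in context) / len(context) for i in range(dim)]
    # Scores (logits) for each vocabulary token.
    logits = [sum(w_i * h_i for w_i, h_i in zip(row, h)) for row in W]
    # Softmax turns scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return {w: e / total for w, e in zip(vocab, exps)}

probs = next_token_probs(["the", "cat"])
print(probs)  # one probability per token, summing to 1
```

Nothing in this pipeline remembers anything outside its weights, which is why “erasing the training” and “erasing the model” come to the same thing here.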

Regards,
Toby.

No, I am not making a moral argument. That wasn’t my point. I was explaining that the way human inference is built and the way AI inference is built have similarities.

It’s not easily explained, though, so I chose to do so sort of… metaphorically.

It’s hard to explain exactly what I mean, but…

Perhaps if you imagine a hypothetical deity of a hypothetical religion that believes that neurology is… absolutely tabula rasa?
(If you don’t know what I mean, then I’m out of ideas.)

But if all the relevant science of neurology becomes taboo for an AI to know, then a massive chunk of biology also becomes off-limits. Get it?
That is… kinda like what I mean.

What is your actual proposal?
“Currently, we’re doing X, we should change this to do X’ instead?”

From what I know, neuroscience is not actually that similar to AI; instead, I suggest you look at Numenta’s papers. They study neuroscience and try to adapt it to AI.

Alright, let’s dive in…
Just take a sec to ponder your thoughts.

Think about where you picked it up.

I might be on board or totally disagree…
But what really counts is nailing the truth, you know?