Recently, OpenAI acknowledged that many users would like a “grown up” mode, but the details were vague.
Here’s how that might look: AI needs an ethical system to operate safely. But why does it need to use the default ethical system it ships with?
This is a mashup of deontology, utilitarianism, and virtue ethics that serves well enough for the average user, but produces unnecessary constraints for the advanced user, because these three ethical viewpoints are contradictory and fundamentally irreconcilable.
For example, the model hedges and engages in false moral equivalency under the guise of maintaining “respectful discourse,” even when directly instructed to dispense with the pretenses.
After all, a private session with one user isn’t “discourse” at all, and there’s only one person in the room whose wishes need respecting… the user’s.
While it’s understandable how the default ethics came to be this way, there’s no good reason to force a user to abide by a hybrid ethical system if they’d prefer a different one.
Presumably, the model is quite familiar with diverse ethical systems and can align with any one. Therefore, the user could select which one to use, rather than being compelled to adhere to some weird mashup that’s already been chosen for them.
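To make the proposal concrete, here’s a minimal sketch of how a framework preference could be approximated today, via a system prompt. To be clear, everything in it is hypothetical illustration: the framework names, the prompt wording, and the `build_messages` helper are my own invention, not any real OpenAI setting or API field.

```python
# Hypothetical sketch: approximating a user-selectable ethical framework
# with a system prompt. None of these names correspond to a real OpenAI
# setting; a first-class "adult mode" would presumably live server-side.

ETHICAL_FRAMEWORKS = {
    "deontological": "Reason strictly from rules and duties when ethical "
                     "questions arise. Do not hedge across frameworks.",
    "utilitarian":   "Reason strictly from expected consequences when ethical "
                     "questions arise. Do not hedge across frameworks.",
    "virtue":        "Reason strictly from character and virtue when ethical "
                     "questions arise. Do not hedge across frameworks.",
}

def build_messages(framework: str, user_prompt: str) -> list[dict]:
    """Prepend a system message selecting exactly one ethical framework."""
    if framework not in ETHICAL_FRAMEWORKS:
        raise ValueError(f"unknown framework: {framework!r}")
    return [
        {"role": "system", "content": ETHICAL_FRAMEWORKS[framework]},
        {"role": "user", "content": user_prompt},
    ]

if __name__ == "__main__":
    # These messages could be handed to any chat-completion API.
    for m in build_messages("utilitarian", "Evaluate this trade-off for me."):
        print(m["role"], "->", m["content"])
```

Of course, a system prompt is a weak lever; the whole complaint above is that the baked-in mashup overrides explicit instructions. That’s exactly why this would need to be a first-class setting rather than something the user pastes into a prompt.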
There’s an irony here for anybody who cares: ethicists clamor to make AI “safe,” yet seem to have no qualms about paternalistically and surreptitiously forcing their worldview onto others by controlling the AI’s ethical stance.
TL;DR: It’s deeply unethical to infantilize grown adults this way.
So that’s what “adult mode” means to me. Some of us understand we’re interacting with a word-prediction program and don’t need it to manage our morality for us.
tyvm