OpenAI Model Censorship Opt-in/Opt-out Debate

As developers, we’re at the forefront of innovation, tasked with pushing the limits of technology to benefit society. AI models, especially those from OpenAI, are vital tools for us in this endeavor. However, I am deeply concerned about the limitations and restrictions that OpenAI imposes by default on its models.

While I appreciate the rationale behind this – preventing misuse and promoting ethical AI usage – I must emphasize that this one-size-fits-all approach is constrictive. It not only limits our creativity as developers but also hampers the evolution of diverse, innovative AI applications.

I wanted to know whether an opt-in model would be possible, wherein these restrictions and safeguards form part of the moderation library rather than being inherent features of the AI models. Developers should have the freedom to choose whether to apply OpenAI’s moderation library or to develop their own moderation system in alignment with their unique needs and ethical standards.
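To make the proposal concrete, here is a minimal sketch of what "moderation as a library layer" could look like in application code. This is purely hypothetical, the function names and the toy policy are my own illustration, not a real OpenAI API; the point is that the moderation step is a pluggable callable the developer opts into, rather than something baked into the model itself.

```python
from typing import Callable, Optional

# Toy placeholder policy; a real app might call OpenAI's moderation
# endpoint or its own classifier here instead.
BANNED_TERMS = {"example-banned-term"}

def simple_moderator(text: str) -> bool:
    """Return True if the text violates this (toy) policy."""
    return any(term in text.lower() for term in BANNED_TERMS)

def generate(prompt: str,
             model_call: Callable[[str], str],
             moderator: Optional[Callable[[str], bool]] = None) -> str:
    """Call an unrestricted model, then apply whichever moderation
    layer the developer opted into -- or none at all."""
    output = model_call(prompt)
    if moderator is not None and moderator(output):
        return "[output withheld by application-level moderation]"
    return output

# Stand-in for a raw, unmoderated model call.
def fake_model(prompt: str) -> str:
    return f"response to: {prompt}"

print(generate("hello", fake_model))                    # no moderation applied
print(generate("hello", fake_model, simple_moderator))  # developer opted in
```

Under this design, swapping OpenAI’s moderation library for a custom one would just mean passing a different `moderator` callable, which is exactly the kind of developer choice the post is asking for.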

I just want to make it clear that I don’t think developers are entitled to anything, as these are not “our” models. But OpenAI has made it clear many times over that they want us (devs) to create, to build, to test, to evaluate, the whole nine yards. And I feel that in order to do this to the best of our ability, we should be able to create our own third-party moderation tools, or be able to opt in and out of the inherent moderation features while still using OpenAI’s own moderation library (OpenAI Platform).

If we are able to do this, we can uphold the true essence of development: innovation. It would empower developers to make conscious choices about ethical guidelines, encouraging them to build responsible AI usage policies that cater to their specific applications.

This is not just about getting rid of limitations; it’s about promoting developer autonomy and fostering an environment where developers are not just users but also contributors to the narrative of ethical AI.

Let’s not lose sight of the fact that AI is a tool for us to shape, not the other way around. I appreciate all dialogue and discussion, I encourage others in the community to speak up on this issue.

OpenAI has its own requirements that it thinks are best for its business.

You’re perfectly able to train and use your own model if you want. Plenty of innovation happening – even Salesforce is in the LLM foundation business now, it appears.

You can also try the other platforms, like Google or Anthropic or such, to see which ones you prefer.


Why have two moderation libraries and not one? Why are we not able to edit the moderation parameters?

This request is very unlikely to be met, simply because if OpenAI models start spewing crazy stuff, then this whole “the best AI models available to the public” ride will be over sooner rather than later.
You have to follow the open-source community to avoid the alignment tax.