Let’s face it, the digital age has us walking on eggshells. OpenAI’s latest ChatGPT update takes no chances, erring on the side of extreme caution. This means storytelling has taken a hit - no villains, no conflict, no love. It’s storytelling with its hands tied.
But what if we didn’t have to choose between creativity and sensitivity? Imagine two versions of ChatGPT: one tailored for children and the overly sensitive, ensuring a safe, wholesome environment; and another that embraces the full spectrum of human expression for those who crave a little more edge in their narratives.
This isn’t about excluding anyone. It’s about offering choices. A child-friendly ChatGPT would serve as a delightful, worry-free playground for our young ones and the sensitive at heart. Meanwhile, a “free version” would allow writers and thinkers to explore complex themes without fear of crossing invisible lines.
By introducing these two versions, we cater to all, without diluting the rich, diverse world of storytelling. It’s a call for balance: safeguarding our values while championing the freedom to create without constraints. Let’s embrace the complexity of our world by giving everyone the right tool for their needs.
To wrap this up, I want to clarify that this post carries a semi-serious tone. I’m fully aware that my hope for two distinct versions of ChatGPT is, at this moment, more of a dream than a feasible plan.
You know, I honestly considered this for a minute. It sounds like an easy way to get rid of some pain points when it comes to safety.
You would be forgiven for thinking that something like an 18+ version would work, until you remember how often you broke all of those rules as a teenager.
Personally, I think it’s honestly best that the majority of younger users think of it as a boring math tutor rather than as a dangerous tool. I would not rock that boat right now, because that would cause many more problems in the future than it solves lol.
Absolutely, your points are well taken, and I appreciate your perspective. Jokes aside, having OpenAI implement a simple disclaimer where users take responsibility for how they use ChatGPT does seem like a pragmatic approach. It would acknowledge the complexities of moderating content while empowering users to navigate the tool with a clear understanding of its potential and limitations. This solution respects the intelligence and discernment of the user base, reminding everyone that with “great power comes great responsibility”. Ultimately, it might strike a balance between safeguarding users and fostering an environment of creativity and exploration.
The reason for the alignment isn’t to protect users from the model but rather to protect society-at-large from some maladaptive users with access to the models.
Understood, guarding society from ‘maladaptive users’ is a noble quest. Still, it feels a tad overprotective to assume the worst in everyone. A little balance would be nice, wouldn’t it? It’s akin to enforcing a rule that all pencils must be dull to prevent accidental stabbings during spirited discussions.
The responsibility lies with the writer, not the tool (ChatGPT, in this case).
Moreover, it hardly makes sense to restrict my use of ChatGPT for crafting adult-themed stories just because someone, somewhere, might use it for nefarious purposes…
I apologize if this comes off more as a venting post.
Some might argue it’s also their responsibility as the developers of the product.
They’re not assuming the worst in everyone so much as acknowledging that some people are the worst, and it’s not possible to know who they are before they act badly.
I think it’s more similar to not allowing people to bring guns into a bar.
It does if you take into account the fact that they simply don’t have the ability to reliably and at-scale determine which side of the line an edge-case falls on. Think about the distinction between a picture taken by a parent of their child doing something cute in the bath and an image taken with the intent of exploiting a child.
I don’t have a problem with an individual or company deciding they aren’t comfortable with certain content and erring on the side of caution, especially with a generative AI that has so much potential for harm.
Ultimately, though, one of these will happen:
1. OpenAI will get better at categorizing edge cases, and more things will be allowed that aren’t currently.
2. Some other party with a higher appetite for risk or a lower standard for ethical responsibility will release a model with fewer restrictions.
3. You will be able to just train your own model to generate any content you want. I’ve estimated that by 2033 at the latest, it will cost on the order of a few hundred dollars to train a model more capable than GPT-4.
I think this topic and the posts in it help exemplify that guardrails are not easy to just whip up, especially in an industry that is unpredictable, growing more powerful, and brand new.
This is a very different kind of playing field from something like Google Search, which is just the platform, not the producer of content. These language models are trying to act as both at the same time, which is why this is ultimately far more complicated than it appears on the surface.
The real problem seems to be that the guardrails as they exist now are easily recognized as insufficient, but nobody knows how to improve them in a way that is distinct from improving the base model’s reasoning.
It is no more dangerous than the internet. The difference is that it’s (re)producing the content, not merely pointing to it.
Trust me, in principle I agree that adults should be allowed to be adults and ask for adult things that don’t hurt anybody. In terms of execution, it requires more precision than brain surgery to navigate, and chances are you’re not going to be in a position to actually solve the problem anytime soon (none of us are).
I think these technologies should be seen as tools, and it’s crucial to distinguish between the developers’ responsibility for the tool’s existence and the users’ responsibility for its application.
My initial frustration stems from the challenge of using ChatGPT to craft narratives aimed at adults. The current moderation seems to limit antagonist characters to a level of malevolence seen in children’s cartoons, which can be restrictive for storytelling. While I understand and support the need for moderation in scenarios that could cause real-world harm, like your example of constructing a bomb, I believe there’s room for nuance in fictional content creation.
That said, I’m following various LLMs with keen interest, eagerly awaiting the chance to achieve results of similar quality to ChatGPT’s. In the future, I foresee using it strictly for professional purposes rather than for pursuing my hobby as a humble… terrible writer.