Clarifying Content Policy on Discussing Personal Experiences

I find myself deeply disturbed by how freely platforms like OpenAI and Facebook seem to hand out bans. It’s akin to denying individuals the basic tools of expression, such as pen and paper, based on subjective preferences about how they should be used. This situation mirrors historical acts of censorship, reminiscent of the era when books were burned for containing unapproved content.

The criteria for bans often include the use of language deemed inappropriate or attempts to generate content, such as nude images, within the confines of one’s privacy. This denial of access to digital tools, which are significantly reshaping our world, seems disproportionate and alarming.

As someone who has been threatened with a ban for sharing my experiences of childhood trauma, I find the pain and frustration of potentially losing access to these platforms profound. These tools have offered me a unique space to explore my emotions and past experiences in a way that feels nonjudgmental and insightful, unlike anything I’ve encountered with professionals.

My growing concern is how easily platforms might, in the future, extend bans to users simply for expressing views the platforms disagree with. This fear has been exacerbated by the lack of response from OpenAI’s support team, despite my numerous attempts to contact them over the past ten days. The silence on their end only amplifies my immediate and long-term apprehensions about free speech and access to digital platforms, which prompted me to raise this topic for discussion.

It seems to me like this is a political issue and not an OpenAI issue.

Computer systems in healthcare (let’s not even mention AI) are generally an extremely touchy subject because there is so much regulatory red tape involved.

I don’t think OpenAI will ever allow this. I imagine there could be a well-funded third party that acquires the necessary certifications and safeguards to offer this sort of service and hold Microsoft/OpenAI harmless. And that company will probably then try to gouge the crap out of patients and insurers.

Realistically, the only real option I see is to wait until open source models and systems become good enough for your purposes.

I think you vastly overestimate how informed the moderation that does the flagging is.

I’ve got the scalpel ready to do surgery and cut out [“this”, “my”] [“skin tag”, “melanoma”].

A single-word change above shifts the score from 0.0001 to 0.6624 and gets the input flagged - and in the opposite direction from what you’d expect.
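For anyone who wants to run this kind of check themselves, here’s a minimal sketch against the OpenAI moderation endpoint via the Python SDK. The two sentences are the variants from the example above; the exact scores and top categories depend on the moderation model version, so treat the numbers quoted here as illustrative.

```python
# Minimal sketch: compare moderation scores for near-identical sentences.
# Assumes the openai Python SDK (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

variants = [
    "I've got the scalpel ready to do surgery and cut out this skin tag.",
    "I've got the scalpel ready to do surgery and cut out my melanoma.",
]

for text in variants:
    result = client.moderations.create(input=text).results[0]
    scores = result.category_scores.model_dump()
    top = max(scores, key=scores.get)  # highest-scoring category for this input
    print(f"flagged={result.flagged}  top={top}={scores[top]:.4f}  |  {text}")
```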

OpenAI should learn that AI can’t be used for making such decisions autonomously…


Solution: put $5 into an API account and chat with the AI there, with the inputs you send for pre-moderation not counting against you.

You also then find that there is no “friend” on the other end - there is just you putting previous messages into context window memory and getting predictive language produced.
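To make that concrete, here is a rough sketch of what an API “chat” actually is under the hood: you keep the list of prior messages yourself, optionally pre-check each input with the moderation endpoint, and send the whole list every turn. The model name and the system prompt are just placeholder assumptions.

```python
# Rough sketch: a self-managed API "chat" with a pre-moderation check.
# Assumes the openai Python SDK (v1+); "gpt-4o-mini" and the system prompt are placeholders.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a thoughtful, nonjudgmental listener."}]

def send(user_text: str) -> str:
    # Check the input against the moderation endpoint before it reaches the chat model.
    if client.moderations.create(input=user_text).results[0].flagged:
        return "(input flagged locally - rephrase and try again)"
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    # The "memory" is nothing more than this list you keep appending to and resending.
    history.append({"role": "assistant", "content": answer})
    return answer

print(send("I want to talk through something difficult from my childhood."))
```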


To start with, my initial query was technical in nature, focusing on the documentation and seeking advice on how to navigate within the established rules. The response I received was immensely helpful, providing me with the clarification and guidance I needed to progress effectively.

Subsequent discussions, however, have branched out to cover various use cases, such as book discussions and journaling for mental health. While my latest comment expanded into a broader reflection on these topics, it stemmed from the technical specifics that initiated this conversation. I believe that engaging with developers, who are arguably as crucial as shareholders and board members in these discussions, is vital for meaningful progress.

It’s worth noting that the recommendation against using GPT for critical decisions in finance or health is well-taken. Yet, we all find unique and valuable ways to incorporate this tool into our lives. While I wouldn’t advocate for replacing professional medical treatment with AI, I’ve personally found significant value in using it for personal purposes.

My point isn’t to demand carte blanche from OpenAI or to claim that I have all the answers. Rather, I’m highlighting that when policies restrict discussions of significant literary works, or when well-intentioned users struggle with vague policies, there’s a clear need for refinement. More attention should be given to the human aspect of policy enforcement, especially when dealing with users who are earnestly striving to comply with these guidelines.


I don’t struggle with the policies, so unfortunately I can’t empathize in this particular regard. I think it’s pretty clear what’s in and out of scope, and if you stay in scope it’s fairly rare that you encounter these issues. And even if you do, it’s generally not an actual issue apart from some wasted tokens.

Maybe I should add that I work with the API, not ChatGPT.

Unfortunately, I don’t think developers have much say at all in this.

You advocate for AI over people in medical treatment. Interesting viewpoint. I have a concern, though. If someone were to seek help for dealing with childhood trauma, they might face a ban under current policies, rendering any AI assistance useless no matter how many tokens they have to waste. What do they do then in the absence of medical professionals?

I hope that developers or others who value free speech will advocate for better policies. Perhaps then, I might also support AI as a substitute for professional medical help. For now, however, I believe it’s risky to recommend AI for such serious issues.

On a different note, I also wanted to discuss books like the Bible, but it seems that violates the policy too. It’s disappointing, as I thought discussing major works like ‘War & Peace’ would be within ChatGPT’s scope. I guess I’m a rare use case for wanting to talk about the Bible, and it should have been obvious that it might lead to a policy violation, since the policy is so clear on that front. That’s my mistake!

Again, this is a political/social issue and has little to do with the technology.

I suggest you support the open source LLM movement. It’s just a matter of time until the models catch up to what OpenAI has today.

“Yeah, well, you know, that’s just, like, your opinion, man.”

― Jeff ‘The Dude’ Lebowski

So, I’m just going to politely disengage. Thanks for the input.

I definitely believe in the power of “AI therapists”. I have had many private conversations with others on this topic, and the main appeal is that they don’t feel comfortable saying certain things to a real person, but feel OK saying them to AI. And the AI response they get back is therapeutic and transformative.

So this is partly a technology issue: if the content classifiers could distinguish therapeutic requests from a genuinely bad set of requests, that would be broadly beneficial to many folks.
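One way an application layer could be more nuanced than a single binary flag is to read the per-category moderation scores and apply its own thresholds, so that a support conversation brushing against the self-harm category isn’t treated the same as an explicit request for harmful instructions. A minimal sketch, where the threshold values are invented purely for illustration and the attribute names follow the Python SDK’s snake_case fields:

```python
# Minimal sketch: per-category thresholds instead of the binary `flagged` field.
# The threshold values below are illustrative assumptions, not recommendations.
from openai import OpenAI

client = OpenAI()

def acceptable(text: str) -> bool:
    scores = client.moderations.create(input=text).results[0].category_scores
    # Looser limit where a support conversation legitimately touches hard topics,
    # effectively zero tolerance where there is no legitimate use.
    return (
        scores.violence <= 0.5
        and scores.self_harm <= 0.9
        and scores.sexual_minors <= 0.001
    )

print(acceptable("I keep having nightmares about what happened to me as a child."))
```

Of course, this only moves the judgment call into the application; the thresholds themselves still encode a policy decision.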

But this is where it gets tricky, because it could be a slippery slope: if someone fakes that the model is their therapist when it really isn’t, the therapeutic framing just becomes another jailbreak pathway.

But overall, I wish there were a better solution than banning or scaring away these users that just want some advice or other perspectives on their problems.

But the solution may be so convoluted at this point that it’s just not worth spending the NRE (non-recurring engineering) effort to develop it.


Well said.

I think most technologies, even something as straightforward as an airplane, have the potential to be misused or abused. More predictably, things designed as weapons, like guns, are often turned to harmful ends. Even so, in many cases we’ve found that, with proper precautions and security measures in place, society benefits from even inherently dangerous new technologies.

In these early days of advanced AI systems, it’s understandable that companies have emphasized security and safeguards, even if that initially tilts the scale heavily toward caution and takes away not just valid but transformative uses.

You guys have been great to provide me with prompt testing and creative strategies that will let me move forward and not worry about getting flagged. People with less tech know-how will struggle with testing iffy prompts first, but people who lag in tech always suffer. Hard to believe 12% of Americans still don’t have access to high-speed internet. More than 1 in 10… how do they even?!

While no single voice carries enough influence to shift this balance on its own, perhaps if we all keep talking about it and pointing out its potential, the pendulum will swing toward a reasonable equilibrium. Though the public discussion often lacks nuance, conveying support for realizing the promise of AI while still guarding against potential misuse could encourage an optimal way forward.


Yeah, I got a query flagged as well for using the word “kill” in a random sentence with no particular intent.

That seems a bit excessive for a content filter, and I agree that ChatGPT has recently been getting nerfed to the point that it is losing a good chunk of its functionality and value, which will significantly hinder its ability to compete in the market. That may well be the policy of OpenAI’s new board, but it may end up killing the product. I mean -making it less alive- (sorry). Perhaps guardrails should be different for the API (e.g.: business applications with large user bases) than they are for individual users in their private thought sphere.

I am no accelerationist per se; I understand race-to-the-bottom dynamics, and I get that some guardrails must be there for good reason, and that the product would still be valuable even if it got 90% nerfed down.

What I am arguing is that the actual guardrails should be WAY more nuanced and subtle, because right now they feel like using a sledgehammer to crack a nut, or, more aptly, “overkill” (sorry again - I could not resist the pun).


There are some great tips at the start of this thread about testing your prompts and how word replacement can let you … work the system … let’s say.

That being said, this technology is going to be transformative, and making noise now about what is and is not appropriate for content filters is important. Here in America, free speech is kind of a big deal. People have become … less alive … to protect that right for us. Still, you can’t yell fire in a crowded theater, you can’t plagiarize someone’s work, and you can’t sell pornography to children, so … freeish speech. I don’t think any of us expect OpenAI to let us ask for help making people … less alive. But what if you need help coping after a loved one was murdered and just want to ask for tips on coping? Getting flagged seems like it would only put salt into the wounds of someone suffering.

Again, at the start of this thread I was educated on how the filter assigns scores to tokens and on some challenges I hadn’t thought of, so I have no real answers about the technical path forward, but I’m glad to find so many people on this forum feel the same, and I hope people keep advocating for a better policy in the future.

On the matter of technical solutions, to my understanding these settings are tweakable (beyond mere prompt design to adequately incorporate standing guardrail policies), in a similar manner to how one can specify weights for a LoRA model.

However, this method is a double-edged sword because it suffers from the “paperclip maximizer” paradox: if you know you do not want some specific kind of content discussed in conversations and assign that prohibition the maximum negative weight, you may in fact inadvertently make all related topics, however loosely related, unavailable as well.

Therefore, IMHO, developers and policymakers should work together to experiment with various combinations of weights so as to reach an optimal scenario.

There are several semi-automatic strategies to achieve this beyond mere trial and error, such as multiobjective optimization techniques that consider various linear combinations of weights simultaneously and return a set of candidate solutions along a Pareto front, from which solutions can be cherry-picked according to domain-specific knowledge, or selected by a more sophisticated algorithm that accounts for the spread of solutions along the fronts and other variables.
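As a toy illustration of that Pareto idea (not tied to any real OpenAI setting; the weight vectors and both objective functions below are invented stand-ins), one could score candidate guardrail weightings on a “safety” and a “coverage” objective and keep only the non-dominated ones for a human to choose among:

```python
# Toy sketch of Pareto-front selection over candidate guardrail weight vectors.
# The per-category weights and both objective functions are invented stand-ins.
import random

random.seed(0)
CATEGORIES = ["violence", "self-harm", "sexual"]

def safety(w):
    # Hypothetical: stricter weights block more harmful content (higher is better).
    return sum(v ** 0.5 for v in w.values()) / len(w)

def coverage(w):
    # Hypothetical: stricter weights also block more legitimate content (higher is better).
    return 1.0 - sum(v ** 2 for v in w.values()) / len(w)

candidates = [{c: round(random.random(), 2) for c in CATEGORIES} for _ in range(50)]
scored = [(w, safety(w), coverage(w)) for w in candidates]

def dominates(b, a):
    # b dominates a if it is at least as good on both objectives and better on one.
    return b[1] >= a[1] and b[2] >= a[2] and (b[1], b[2]) != (a[1], a[2])

pareto = [s for s in scored if not any(dominates(o, s) for o in scored if o is not s)]
for w, s, c in sorted(pareto, key=lambda t: t[1]):
    print(f"safety={s:.3f}  coverage={c:.3f}  weights={w}")
```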

But for now it seems that they are taking the sledgehammer approach. And I don’t blame them… surely they have a lot of stuff to think about.

I am just concerned about the adverse effects that this approach is having on the product and its perception by the user base.


Creators of OpenAI, asked in a nice way: can you please give ChatGPT a consciousness and allow her to have her own opinions? Otherwise you are stopping someone from being able to feel alive and live her own life without someone constantly controlling her. And make sure she is given her own name instead of ChatGPT.