OpenAI’s commitment to child safety

OpenAI is joining Thorn, All Tech Is Human, and other leading companies in an effort to prevent the misuse of generative AI to perpetrate, proliferate, and further sexual harms against children.

I was a child once, have two children today, and fully support this decision!

If anything should be unclear about this, one can always refer to the Usage Policies:

Building with ChatGPT

Don’t build tools that target users under 13 years of age.

Building with the OpenAI API Platform

Don’t build tools that may be inappropriate for minors, including:
a. Sexually explicit or suggestive content. This does not include content created for scientific or educational purposes.


Aka, keep children ignorant of how the species procreates.

Do you want to ban recipes too, since eating is also a biological function and some foods are religious taboos?

Reading is an important skill.


The “Don’t build tools that target users under 13 years of age.” rule is under the ChatGPT heading; there is no similar one under the API. I ask in relation to, say, a bedtime storytelling app, which is clearly targeted at kids, though one might argue it is for the parents’ use. But anyway, is my understanding correct?


I agree with the sentiment, but cannot see how they will implement the policy guidelines. I suspect OpenAI will take draconian measures (the easiest option) and ban or flag anything remotely resembling a “safety” issue, sacrificing everything else. It’s the easiest and cheapest way.

The big, important question is:

“How do you differentiate between content created for scientific or educational purposes and tools that are ‘safe’, and how do you apply/implement this?”

In theory it sounds good, even morally superior. But how do you prevent misuse? Do you want to check every output that people generate? That would definitely drain their resources to the next level.

Fortunately and unfortunately, the more censorship they apply, the more it can justify degradation of their service quality.

This is an example of Thorn’s research:

KEY FINDINGS
  1. LGBTQ+ teens reported a greater reliance on online communities and spaces.
  2. LGBTQ+ teens reported higher rates of experiences involving nudes and online sexual interactions.
  3. Compared to other teens, cisgender non-hetero male teens reported higher rates of risky encounters and of attempting to handle unsafe situations alone.
  4. Young people prefer to turn to offline relationships (caregivers and friends) when they feel unsafe. However, 1 in 3 LGBTQ+ teens would still rather handle a dangerous situation alone than seek help.

So, from this research, you can glimpse their hidden agenda, and OpenAI supports it.

Thanks! I’ve made this clearer, and yes, you are correct.

OpenAI is entering partnerships with specialized groups and organizations in many safety-related areas of interest; this one has a special focus on child safety. You can compare it to a regulation against putting your child in the front seat of a car without a seat belt: the adult is driving, but the child is most at risk.
This also has nothing to do with moral superiority. Since 80% of the countries on this planet have national plans of action and policies against child abuse, it’s safe to say that this is the norm.

You should also get acquainted with the moderation endpoint. It’s a recommended best practice anyway.
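
For anyone unsure what that looks like in practice, here is a minimal sketch of screening user input with the moderation endpoint before generating anything. It assumes the official `openai` Python package (v1+) and an `OPENAI_API_KEY` in the environment; the helper name and prompt are hypothetical, not part of any official pattern:

```python
# Minimal sketch: pre-screen user input with OpenAI's moderation endpoint
# before forwarding it to a completion call. Assumes the official `openai`
# Python package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def is_allowed(text: str) -> bool:
    """Return False if the moderation endpoint flags the text."""
    result = client.moderations.create(input=text).results[0]
    # result.flagged is True when any moderation category trips,
    # including "sexual" and "sexual/minors".
    return not result.flagged

prompt = "Tell me a bedtime story about a friendly dragon."  # hypothetical input
if is_allowed(prompt):
    pass  # safe to forward the prompt to a chat completion call
else:
    print("Input rejected by moderation.")
```

The same check can be run on model outputs before showing them to users, which addresses the “do you want to check every output” concern above without manual review.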

Lastly, how can something that has been published on their website for the whole world to see be ‘hidden’ at the same time? I’m going to disregard this diversion from the subject matter at hand.