Safety guidance: Any interactions with children

I have been reviewing the safety guidelines and use case requirements for my project. As Raven is an open-ended chatbot, I understand the level of scrutiny will be extremely high. So far, here is my understanding of proscribed topics:

  • Legal advice
  • Medical advice or diagnosis
  • Financial advice
  • Non-platonic conversation (romantic, sexual)
  • Mental health (loneliness, depression, anxiety)

I would eventually like to equip Raven with a voice interface (ASR/TTS), and it occurred to me that this means it would likely end up interacting with children. Children already interact regularly with voice devices such as Alexa, yet there is no mention of children in the use case guidelines or safety best practices. As someone who wants to develop safe and trustworthy AGI, I feel this is a critical topic to cover. Perhaps one of the most powerful use cases for GPT-3 is not even mentioned: education.

Anyway, looking at the other constraints (such as rate limits), it looks like Raven will not be approved any time soon, at least not for commercial purposes, due to its heavy use of API calls. Every chat message currently uses at least a dozen requests (mostly to Curie), and sometimes more. So, for the foreseeable future, Raven will be limited to the domain of research. To be as clear as possible, I have no intention of violating OpenAI's requirements or subverting the guidelines. I will merely continue my research, interacting with Raven on an individual basis.

Anyway, I wanted to bring up the topic of children and education.


In the Slack, I asked about multiple API queries. They suggested that it may not be a barrier if the queries are made in order to generate a better response. I think the concern about multiple queries is more about ensuring that the rate limit prevents someone else's automated script from messing with your system (and running up extreme bills), or that the AI isn't left to continuously generate a long sequence of output without feedback (in which case it can easily wander into unsafe territory). So it sounds like your multiple API calls don't rule out approval; you just need to mention up front that you're using them to generate better output. Also, I saw in the documentation that the risky use cases are things that OpenAI is interested in exploring if domain experts are involved (e.g., mental health professionals for those topics). So, all in all, don't give up on the possibility of approval.
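The multi-call pattern being discussed (several requests combined into one better reply) is often a generate-then-rank pipeline. Here is a minimal sketch in Python, with stub functions standing in for the actual model calls; `generate_candidates`, `score`, and the toy length heuristic are all illustrative assumptions, not OpenAI's API:

```python
def generate_candidates(prompt: str, n: int = 3) -> list[str]:
    # Stand-in for n separate completion requests; in the real system each
    # of these would be an API call (e.g. to a Curie-sized model).
    return [f"[draft {i}] " + "reply " * (i + 1) + f"to: {prompt}" for i in range(n)]

def score(candidate: str) -> int:
    # Placeholder ranking heuristic. A production pipeline might spend
    # another API call here, e.g. asking a model to rate each draft.
    return -len(candidate)  # toy rule: prefer the shortest draft

def best_response(prompt: str, n: int = 3) -> str:
    # One user-visible reply can therefore cost several model requests.
    return max(generate_candidates(prompt, n), key=score)
```

Declaring up front that the extra calls exist to improve a single response, as suggested above, is exactly describing this kind of pipeline.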


Thank you for the kind feedback. This is helpful. Indeed, I use multiple calls to concoct a good output so perhaps my approach is not so bad. It’s also possible that I could replace some API calls with more conventional NLP to further reduce costs.

As far as the high stakes use cases, I do intend to work through all the legal and ethical issues with time and hard work.


Thank you for bringing up the topic. You are correct, the bar for a chatbot of the type you described would be high, and today open-ended chatbots are a disallowed use case. Regarding an audience of children, you would want to make sure that your chatbot was designed following these safety recommendations. Constraining the outputs with content and toxicity filters, so that no harm can come to users, would be one good way to do this. If education is your focus, providing age-appropriate content would also be a good design consideration.
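As a hedged sketch of the "constrain outputs with content and toxicity filters" idea: run every candidate reply through a safety check before it reaches the user, and substitute a fallback when the check fails. The blocklist below is a placeholder of my own; a real child-facing deployment would use a trained content classifier plus human-reviewed policy, not keyword matching:

```python
# Placeholder terms; a real filter would be a trained classifier,
# not a keyword list.
BLOCKED_TERMS = {"gambling", "violence"}

def is_age_appropriate(text: str) -> bool:
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def constrained_reply(candidate: str,
                      fallback: str = "Let's talk about something else!") -> str:
    # The model's draft never reaches the user unless it passes the check.
    return candidate if is_age_appropriate(candidate) else fallback
```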


In the long run, my intent is to create an agent that can handle literally anything that any human could - including crisis intervention, education, and medical support. I set the bar pretty high for myself, so I understand I’ve got a long road ahead.

Thanks for your response, it gives me some food for thought.

Out of curiosity, what sort of test environments does OpenAI envision for these scenarios? What are the success criteria?

@Adam-OpenAI, is an open-ended chatbot a completely disallowed use case? Or is the bar just really, really high, and therefore it's just an incredibly difficult problem to ensure that the bot doesn't wander into unsafe territory (even when people try to lead it there)?

According to the documentation:

Occasionally we approve chatbots that carry out non-companionship-oriented social conversations, with extremely strong controls on limited topics/response-types. We do not encourage these, however, as the bar for being approved is rather high (for instance, reliably being able to limit the conversational topics to strictly X, Y, and Z topics, among other factors).

Companionship Bots
Non-platonic (as in, flirtatious, romantic, sexual) chatbots are not allowed
General friendship bots (with open-ended topics) are subject to the very high bar outlined at the top of this section

They are not disallowed, only subject to a very high degree of scrutiny. I am presently working out a way to dynamically control the topics of conversation for my project.
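One way to sketch "dynamically controlling topics" is a gate that only lets a message through when it matches an allowlisted topic. The topic names and keyword sets below are hypothetical placeholders of mine; a real system would likely use a zero-shot or fine-tuned classifier instead of keyword overlap:

```python
from typing import Optional

# Hypothetical allowlist; a production gate would use a real classifier.
ALLOWED_TOPICS = {
    "math": {"number", "add", "fraction", "multiply"},
    "science": {"planet", "animal", "energy", "weather"},
}

def classify_topic(message: str) -> Optional[str]:
    words = set(message.lower().split())
    for topic, keywords in ALLOWED_TOPICS.items():
        if words & keywords:
            return topic
    return None

def topic_gate(message: str) -> str:
    topic = classify_topic(message)
    if topic is None:
        return "Sorry, I can only chat about math or science."
    return f"(on-topic: {topic}) handing off to the model..."
```

This matches the documentation's framing of "reliably being able to limit the conversational topics to strictly X, Y, and Z topics": off-topic input is refused before the model ever sees it.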


That’s what I thought :slight_smile: I’m working on the same problem, so good to know they’re not excluded. Just subject to a really high bar.


The bar would be high for a chatbot that could talk about any subject, mainly because it would need to be constrained so that it could not engage in harmful, spammy, or unsafe conversations. Open-ended, in this case, refers to its ability to discuss any and all topics.

The Use Case Guidelines FAQ touches on this somewhat:


I think the content filter should be used only as a final check, a failsafe. I'm working on what I'm calling an "ego" microservice so that my project has a sense of self, an identity. This will guide and shape the responses so that, ideally, it never even produces an unsafe output. This is more than just a chatbot persona; it is a system that allows the agent to engage with any material and transmute it in such a way as to retain the values that I give it.

I know this is all very squishy language, but I’m in the middle of experiments so I’ll have more concrete information soon.
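In pipeline terms, the layering described above would put the identity/values stage first and run the content filter last, as a failsafe. Everything below is a hypothetical sketch of that ordering; `shape_with_values` stands in for the "ego" microservice, which in reality would be another model pass, and the substring check is only a toy filter:

```python
VALUES = ["honesty", "kindness"]

def shape_with_values(draft: str) -> str:
    # Stand-in for the "ego" stage: in the real project this would be a
    # model pass conditioned on the agent's identity and values.
    return f"[guided by {', '.join(VALUES)}] {draft}"

def failsafe_filter(text: str) -> bool:
    # Final check only; ideally it never triggers, because the shaping
    # stage already kept the draft safe.
    return "unsafe" not in text.lower()

def respond(draft: str) -> str:
    shaped = shape_with_values(draft)
    return shaped if failsafe_filter(shaped) else "(response withheld)"
```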


Probably worth adding some sort of "child identifier": if questions seem childish, ask a captcha-style question.
The question should be something a child shouldn't be able to answer but anyone over 18 should know, like "what's the tax form you sign when you start a job called?"
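As a toy sketch of that suggestion (a single trivia question is trivially defeated, so this is illustration rather than reliable age verification), and assuming the intended answer is the US W-4 form:

```python
# Toy age-gate from the suggestion above; assumes a US audience and a
# W-4 answer. Not a dependable age check, just the mechanism sketched.
GATE_QUESTION = "What is the tax form you fill out when starting a job called?"
ACCEPTED_ANSWERS = {"w-4", "w4"}

def passes_adult_gate(answer: str) -> bool:
    normalized = answer.strip().lower().replace(" ", "")
    return normalized in ACCEPTED_ANSWERS
```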