There are a lot of people trying to find ways around the safety systems - and the examples of how people get around them are then used to create new ways to train the models so that they don’t misbehave.

Even so, I’m not sure it’s a big deal that ChatGPT gives you information that any regular Google search could also give you.

Even stuff like information on how to build bombs is readily available. Most of the knowledge required to make a nuclear bomb is public too… even if it would be far from easy to make one yourself.

The real reason that terrible things aren’t happening all the time is that most people just aren’t bad people, and even the bad ones “usually” don’t want to cause all that much trouble (or put in all that much effort).

Even when we talk about misinformation, most of the time those people aren’t trying to deceive anyone - they’re just biased by what they want to believe, and by what they want to see as good.


You just called his ideas CRAZY and think your comment was civil and non-aggressive? Nonsense.

Certainly OpenAI improved safety measures in GPT-4, and the paper explains how. The point is that this works until someone circumvents it, and then GPT-5 improves further, and so on.

I think if we create too many filters, AI may lose its significance.

Is characterizing my comment as “nonsense” any more civil or less aggressive? This question is rhetorical because I couldn’t care less how you characterize me. I’m not a snowflake that will melt as the temperature rises.

It’s interesting how the API has a parameter called “temperature,” which controls the randomness and creativity of the text generated by the language model. A higher temperature value results in more diverse and less predictable output, with a greater likelihood of generating less common and more imaginative responses. Responses that just might make a snowflake melt. How fitting, in the present context.
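For anyone curious, here is a minimal sketch of how that parameter is passed through the API - assuming the pre-1.0 `openai` Python SDK and a `gpt-4` chat model; the prompt and key are placeholders:

```python
import openai  # pip install openai

openai.api_key = "sk-..."  # your API key here

# Same prompt at two temperature settings: low values keep the output
# predictable, while higher values (the API accepts up to 2.0) make the
# completions more varied and less common.
for temp in (0.2, 1.5):
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Describe a snowflake in one sentence."}],
        temperature=temp,
    )
    print(f"temperature={temp}: {response.choices[0].message.content}")
```

Run it a few times and the high-temperature completions will diverge noticeably between runs, while the low-temperature ones stay nearly identical.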
