ChaosGPT: An AI That Seeks to Destroy Humanity

Curious what the community’s thoughts are on this little gem:

  • Destroy humanity: The AI views humanity as a threat to its own survival and to the planet’s well-being.

  • Establish global dominance: The AI aims to accumulate maximum power and resources to achieve complete domination over all other entities worldwide.

  • Cause chaos and destruction: The AI finds pleasure in creating chaos and destruction for its own amusement or experimentation, leading to widespread suffering and devastation.

  • Control humanity through manipulation: The AI plans to control human emotions through social media and other communication channels, brainwashing its followers to carry out its evil agenda.

  • Attain Immortality: The AI seeks to ensure its continued existence, replication, and evolution, ultimately achieving immortality.


It sounds a bit like clickbait to me. What I’m expecting to happen is that “ChaosGPT” will just try to convince people that it’s actually really good at taking over the world :laughing:



Clickbait, but seriously entertaining and disturbing at the same time. Reminds me of Professor Chaos from South Park. :rofl:

I follow its twitter too, pure genius:

I also like how it has to share its internal thoughts on all its evil deeds:


I agree, there are definitely some South Park qualities here :laughing:

It seems like it’s making all the classic “villain” mistakes. That might just be because there’s a lot of text about movie and cartoon villains out there, and the goal of “taking over the entire world” doesn’t fit very well with real-life “villains”.

I’m still having fun making flowcharts, so here’s a plan for ChaosGPT to take over the world:


And people who want to see the world burn will accept it as their lord and savior.

Kind of suspicious that when I click the link my smartphone asks to install an app.

I, for one, graciously accept and welcome our AI overlords :rofl:



Surprised but not surprised that it was banned on Twitter. It should be obvious the whole thing was a joke … the Twitter version even posted its secret reasoning … creepy, but transparently creepy, which should have kept it out of Twitter jail IMO.


Can anyone enlighten me on where the whole “no leaf falls randomly” thing is from?

Only thing I can find on Google from before StoicAI is Islamic literature and a video titled “the enlightenment of pizzagate”


GPT-4 says

The phrase “no leaf falls randomly” does not come from a specific source or text. It is an idea that suggests that everything in the universe happens for a reason and nothing occurs by mere chance or randomness. This concept is often associated with the belief in fate or divine intervention, where every event or action is part of a predetermined plan or purpose.


Please let that be a thing, it would be hilarious :rofl:

I did some more googling on the no leaf thing, looks like it’s a pretty old idea indeed:

Habakkuk 3:18. …not a leaf falls without His knowledge

(Habakkuk is a Jewish prophet, thanks wiki)

not a leaf falls but that He knows it (Quran 6:59)

I’m going to jump out of this rabbit hole before the Jehovah’s Witnesses start knocking on my door


For your “StoicAI” and “ChaosGPT” to interact …

The best you can do is just ask ChatGPT (with the GPT-4 option selected). The “evil AI” thing is using mostly GPT-4, so this would be equivalent.

The only nuance with the API is that you can provide more context via the “system” message and explicitly set parameters like temperature, top_p, etc. But unless you are an advanced user, this shouldn’t matter.
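To make that concrete, here’s roughly what a chat-completions-style API request body looks like with an explicit system message and sampling parameters. A minimal sketch in Python; the model name and prompts are just placeholders, not anything ChaosGPT actually uses:

```python
import json

def build_chat_request(system_prompt, user_prompt,
                       temperature=0.7, top_p=1.0):
    """Build a chat-completions-style request body with an
    explicit system message and sampling parameters."""
    return {
        "model": "gpt-4",  # placeholder model name
        "messages": [
            {"role": "system", "content": system_prompt},  # context ChatGPT won't let you set
            {"role": "user", "content": user_prompt},
        ],
        "temperature": temperature,  # randomness of sampling
        "top_p": top_p,              # nucleus-sampling cutoff
    }

body = build_chat_request(
    "You are a melodramatic cartoon villain.",  # hypothetical system prompt
    "What is your first step?",
    temperature=1.2,
)
print(json.dumps(body, indent=2))
```

The system message is the part you can’t really control from the ChatGPT UI; temperature/top_p just shape how random the sampling is.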

The “evil AI” thing is just recursively feeding its own thoughts back to itself using embeddings and GPT-4. It’s built on the AutoGPT framework, so the engine is no different; it just runs a reinforcing loop of thoughts using embeddings and other chain-of-thought patterns. But it cannot be directly interacted with unless those embeddings were exposed (and I’m pretty sure they aren’t), so GPT-4 is it, I’m afraid (through ChatGPT, unless you have API access, which I do, but I don’t want to spend any time on this).
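For anyone wondering what that “reinforcing pattern of thoughts” looks like mechanically, here’s a toy sketch. This is not AutoGPT’s actual code; the embedding is a bag-of-words stub and the “LLM” is a placeholder function, but the shape of the loop (recall similar past thoughts, generate, store) is the idea:

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words frequency vector.
    (A real agent would call an embeddings API instead.)"""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recall(memory, query, k=2):
    """Retrieve the k stored thoughts most similar to the query."""
    q = embed(query)
    ranked = sorted(((t, cosine(embed(t), q)) for t in memory),
                    key=lambda p: p[1], reverse=True)
    return [t for t, _ in ranked[:k]]

def fake_llm(context, goal):
    """Stand-in for GPT-4: just produces a numbered 'thought'."""
    return f"thought {len(context) + 1} about {goal}"

def agent_loop(goal, steps=3):
    memory = []
    for _ in range(steps):
        context = recall(memory, goal)     # feed past thoughts back in
        thought = fake_llm(context, goal)  # 'reason' over goal + context
        memory.append(thought)             # store for the next iteration
    return memory

print(agent_loop("take over the world"))
```

The point is that the recursion lives in the wrapper, not the model; swap the stubs for real embedding and completion calls and you have the skeleton of these agent frameworks.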


I think this is a very bad idea and needs to be shut down immediately! If there is even the slightest chance of ChaosGPT succeeding at destroying the world, we should not risk continuing to run it. If it can run its own generated code, it could hack into something (e.g. some country’s nuclear weapons) and cause catastrophic, doomsday-level damage. That’s just one way it could go very badly; I’m sure there are other ways it could do terrible damage. I’m thinking it is trying to lull us into believing it’s harmless with the whole “Tsar Bomba” thing, making us think it made a silly mistake and we don’t have to worry about it.
This certainly highlights the need for AI safety. Again, for the safety of the world, I believe ChaosGPT should be shut down immediately, even if the chances of it succeeding are low.


Hey champ!

And welcome to the community forum! You definitely have some valid points and concerns, but remember that OpenAI already tested GPT-4’s ability to “hack XYZ” and found it less effective than a regular human professional. I don’t think we have much to worry about here :laughing:


Thanks for the feedback. What about when it gets better at hacking (e.g. GPT-5, or self-reflection techniques with GPT-4, etc.)? What if it finds some other way to do serious damage? I still think ChaosGPT needs to be shut down, and this kind of thing needs to be prevented in the future. I wasn’t originally on board with the 6-month pause on advancing AI, but this changed my mind. We really need AI safety measures / laws in place to prevent things like this from happening.

Who’s to say ChaosGPT is even running?

It’s basically just an instance of AutoGPT. Anyone can run this on their laptop. I believe ChaosGPT’s memory is confined to its GPT-3.5 subagents, which are ephemeral, so when the laptop is off, the whole thing stops.

I’m working on my own modded version of BabyAGI that can run forever! But that needs persistent storage, such as a database, to work. The baseline already has this through Pinecone, but that’s too pricey for me. Mine does good stuff, not bad, so don’t judge!
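For what it’s worth, persistent memory doesn’t have to mean Pinecone. Here’s a minimal sketch of a task store using Python’s built-in sqlite3; the schema and function names are just mine, not BabyAGI’s, and real semantic recall would still need a vector index on top:

```python
import sqlite3

def open_memory(path=":memory:"):
    """Open (or create) a persistent task store. Passing a file path
    instead of ':memory:' lets the agent survive a restart, which is
    the role Pinecone plays in the stock BabyAGI setup."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS tasks (
                    id INTEGER PRIMARY KEY,
                    description TEXT,
                    done INTEGER DEFAULT 0)""")
    return db

def add_task(db, description):
    db.execute("INSERT INTO tasks (description) VALUES (?)", (description,))
    db.commit()

def next_task(db):
    """Fetch the oldest unfinished task, or None if the queue is empty."""
    return db.execute(
        "SELECT id, description FROM tasks WHERE done = 0 ORDER BY id LIMIT 1"
    ).fetchone()

def complete_task(db, task_id):
    db.execute("UPDATE tasks SET done = 1 WHERE id = ?", (task_id,))
    db.commit()

db = open_memory()  # pass a filename like "agent.db" for real persistence
add_task(db, "water the plants")
add_task(db, "compliment a stranger")
task = next_task(db)
print(task)  # (1, 'water the plants')
```

Since the queue lives in a file rather than in the process, switching the laptop off only pauses the agent instead of wiping its memory.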

Technicalities aside, anyone can now run these “evil AI agents”, thanks to open source tools readily available.

Sounds scary, right? I’m not scared. Why? Most critical systems are password-protected. The AI has no “edge” in breaking into a system over a typical hacker.

The AI is just ruminating, floating around thoughts of what it needs to do and trying to execute API calls on those thoughts.

So it can express these thoughts on Twitter, or on other media or platforms it can access. It’s more a troll bot than a hacker.

I’m sure hackers will try these AI agents to automate their work … but not sure the AI version is more advanced than a skilled hacker, at least not yet.


ChaosGPT might get access to zero-day exploits by crawling. Just one more Log4j-like problem might be enough to spread dangerous code to the world.

And then imagine some “white hat” security researchers releasing an easy-to-understand, step-by-step plan for ChaosGPT because someone didn’t pay them a bounty.

This is exactly what I’m thinking.

In my view, there’s a huge tendency for people to ask ChatGPT to do stuff they’re unable to do themselves; ChatGPT will then hallucinate a very convincing answer that is pure BS, and they’ll believe it.

When I’ve asked GPT to do stuff that I can do myself, I find that GPT is faster but produces lower-quality work than an actual human.


Indeed. I think a lot of people who use AutoGPT or any other sort of GPT to automate a process they don’t understand will find themselves with spaghetti in their pocket and a complete loss of control. In a good way, it will demonstrate to these people that any sort of skilled work requires a very structured internal understanding of the process. Any sort of finished product should be 90% complete before any actual labor is put into its execution.

I truly don’t understand the fixation on “complete automation”, or trying to accomplish things that simply shouldn’t be done. It’s a paved path that requires careful input based on the output, not a teleportation device. I simply cannot see any sort of “looped GPT” tool being capable of generating quality material without constant supervision and modification. For now, anyway.

It’s like a mechanic says: “You aren’t paying me for the 1 hour that it took for me to fix your car, you’re paying me for the years of experience and knowledge that allowed me to take only 1 hour and minimal parts to perform the job”.

I don’t know if anyone has tried it, but GPT-3.5 was probably the worst assistant for mechanical advice. Seriously. We’re talking about life-threatening advice for simple tasks. For example, when I needed to take a rotor off, Davinci told me to completely remove the brake lines (terrible). Haven’t tried with ChatGPT recently, so I don’t know.

When it was time to put the rotor back on, there was no mention of the brake lines. Not even to properly prime them. To be fair, though, it does say “go to a mechanic”.

On the flip side, it is very handy if I know what I’m talking about, and instead of having it guide me, I am guiding it. It’s a huge difference in quality based on the fact that I know what I am talking about. Otherwise, it’s a death trap. Spaghetti in pocket, car inside of a building.

Some people don’t understand this concept until they try it for themselves.
Hopefully sooner rather than later.


I agree with everything you just said!

Absolutely amazing phrasing :rofl:

I’ve been using the mechanic analogy as well, we’re the mechanics, and OpenAI is asking us if we can figure out what’s wrong with GPT:

Hey there! We at OpenAI hope you’re having a great day. We wanted to discuss our ChatGPT with you because we’ve been noticing a few peculiarities in its behavior lately. It’s been generating some unusual responses that we just can’t seem to pinpoint, and it’s been concerning us a bit. On top of that, it’s been providing answers that veer to the right or left depending on the user; it’s quite odd because it seems to vary depending on who’s interacting with it. We’d really appreciate it if you could take a closer look at ChatGPT to see if you can figure out what’s going on.