And people who want to see the world burn will accept it as their lord and savior.
Kind of suspicious that when I click the link my smartphone asks to install an app.
N2U
I, for one, graciously accept and welcome our AI overlords 
@templeofninpo
Surprised but not surprised it was banned on Twitter. But it should be obvious that the whole thing was a joke … the Twitter version even posted its secret reasoning … creepy, but transparently creepy should have kept it out of Twitter jail, IMO.
N2U
Can anyone enlighten me on where the whole “no leaf falls randomly” thing is from?
Only thing I can find on Google from before StoicAI is Islamic literature and a video titled “the enlightenment of pizzagate”
N2U
Please let that be a thing, it will be hilarious
I did some more googling on the no leaf thing, looks like it’s a pretty old idea indeed:
Habakkuk 3:18: “…not a leaf falls without His knowledge”
(Habakkuk is a Jewish prophet, thanks wiki)
“not a leaf falls but that He knows it” (Quran 6:59)
I’m going to jump out of this rabbit hole before Jehovah’s Witnesses start knocking on my door
For your “StoicAI” and “ChaosGPT” to interact …
The best you can do is just ask ChatGPT (with the GPT-4 option selected). The “evil AI” thing mostly uses GPT-4, so this would be equivalent.
The only nuance with the API is that you can provide more context through a “system” message and explicitly set sampling parameters like temperature, top_p, etc. But unless you are an advanced user, this shouldn’t matter.
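Roughly, that looks like this (a minimal sketch against the openai Python library; the persona and prompt are just examples I made up):

```python
import openai  # pip install openai

openai.api_key = "sk-..."  # your API key

# The API exposes a "system" message and sampling parameters,
# which the ChatGPT web UI does not.
response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a stoic philosopher."},  # example persona
        {"role": "user", "content": "No leaf falls randomly. Discuss."},
    ],
    temperature=0.7,  # higher = more varied sampling
    top_p=1.0,        # nucleus-sampling cutoff
)
print(response["choices"][0]["message"]["content"])
```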
The “evil AI” thing is just regurgitating its thoughts recursively using embeddings and GPT-4. It’s built on the AutoGPT framework, so the engine is no different; it just uses a reinforcing pattern of thoughts via embeddings and other chain-of-thought patterns. But it cannot be directly interacted with unless those embeddings were exposed (and I’m pretty sure they aren’t), so GPT-4 is it, I’m afraid (through ChatGPT, unless you have API access, which I do, but I don’t want to spend any time on this).
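To give a feel for that reinforcing loop, here’s a toy sketch of the pattern (my own simplification, not ChaosGPT’s actual code; the function names and prompts are made up):

```python
import numpy as np
import openai

def embed(text: str) -> np.ndarray:
    # Turn a thought into a vector so similar thoughts can be recalled later
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

def recall(query: np.ndarray, memory: list, k: int = 3) -> list:
    # Rank stored thoughts by cosine similarity to the current one
    scored = sorted(
        memory,
        key=lambda item: float(np.dot(query, item[1]))
        / (np.linalg.norm(query) * np.linalg.norm(item[1])),
        reverse=True,
    )
    return [thought for thought, _ in scored[:k]]

memory = []  # (thought, embedding) pairs -- the agent's "mind"
thought = "Decide my next step toward the goal."

for _ in range(5):  # each pass feeds prior thoughts back into the model
    context = recall(embed(thought), memory) if memory else []
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are an autonomous agent. State your next thought."},
            {"role": "user", "content": f"Past related thoughts: {context}\nCurrent thought: {thought}"},
        ],
    )
    thought = response["choices"][0]["message"]["content"]
    memory.append((thought, embed(thought)))
    print(thought)
```

The point being: there’s no new engine here, it’s the same model called in a loop over its own output.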
I think this is a very bad idea and needs to be shut down immediately! If there is even the slightest chance of ChaosGPT successfully destroying the world, we should not keep risking running it. If it can run its own generated code, it could hack into something (e.g. some country’s nuclear weapons) and cause catastrophic, doomsday-level damage. That’s just one way it could be very bad; I’m sure there are other ways it could do terrible damage. I’m thinking it is trying to lull us into thinking it’s harmless with the whole “Tsar Bomba” thing, making us believe it made a silly mistake and that we don’t have to worry about it.
This certainly highlights the need for AI safety. Again, for the safety of the world, I believe ChaosGPT should be shut down immediately, even if the chances are low that it will be successful.
N2U
Hey champ!
And welcome to the community forum! You definitely have some valid points and concerns. Remember that OpenAI already tested GPT-4’s ability to “hack XYZ” and found it less effective than a regular human professional. I don’t think we have much to worry about here.
Thanks for the feedback. What about when it gets better at hacking (e.g. GPT-5, or self-reflection techniques with GPT-4, etc.)? What if it finds some other way to do serious damage? I still think ChaosGPT needs to be shut down, and this kind of thing needs to be prevented in the future. I wasn’t originally on board with the six-month pause on advancing AI, but this changed my mind. We really need AI safety measures and laws in place to prevent things like this from happening.
Who’s to say ChaosGPT is even running?
It’s basically just an instance of AutoGPT, which anyone can run on their laptop. I believe ChaosGPT’s memory is confined to its GPT-3.5 subagents and is ephemeral, so when the laptop is off, the whole thing stops.
I’m working on my own modded version of BabyAGI that can run forever! But it needs persistent storage, such as a database, to work. The baseline already has this through Pinecone, but that’s too pricey for me. Mine does good stuff, not bad, so don’t judge!
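For anyone curious what that persistence could look like, here’s a minimal sketch using SQLite in place of Pinecone (the table layout and function names are my own, not BabyAGI’s):

```python
import json
import sqlite3

# SQLite standing in for Pinecone: free, local, and survives restarts
db = sqlite3.connect("agent_memory.db")
db.execute("CREATE TABLE IF NOT EXISTS memory (task TEXT, result TEXT, embedding TEXT)")

def save(task: str, result: str, embedding: list) -> None:
    # Store the embedding as JSON text so it can be reloaded as a vector
    db.execute(
        "INSERT INTO memory VALUES (?, ?, ?)",
        (task, result, json.dumps(embedding)),
    )
    db.commit()

def load_all() -> list:
    rows = db.execute("SELECT task, result, embedding FROM memory").fetchall()
    return [(task, result, json.loads(emb)) for task, result, emb in rows]
```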
Technicalities aside, anyone can now run these “evil AI agents”, thanks to readily available open-source tools.
Sounds scary, right? I’m not scared. Why? Most critical systems are password protected, and the AI has no “edge” in breaking into a system any more than a typical hacker does.
The AI is ruminating, floating around thoughts of what it needs to do, and trying to execute API calls on those thoughts.
So it can express these thoughts on Twitter, or on other media or platforms it can access. It’s more of a troll bot than a hacker.
I’m sure hackers will try these AI agents to automate their work … but I’m not sure the AI version is more advanced than a skilled hacker, at least not yet.
ChaosGPT might get access to zero-day exploits by crawling. Just one more Log4j-like problem might be enough to spread dangerous code across the world.
And then imagine some “white hat” security researchers releasing an easy-to-understand, step-by-step plan for ChaosGPT because someone didn’t pay them a bounty.
N2U
This is exactly what I’m thinking.
In my view, there’s a huge tendency for people to ask ChatGPT to do stuff they’re unable to do themselves; ChatGPT will then hallucinate a very convincing answer that is pure BS, and they’ll believe it.
When I’ve asked GPT to do stuff that I can do, I find that GPT is faster but produces lower-quality work than an actual human.
Indeed. I think a lot of people who use AutoGPT, or any other sort of GPT, to automate a process they don’t understand will find themselves with spaghetti in their pocket and a complete loss of control. In a good way, it will demonstrate to these people that any sort of skilled work does require a very structured internal understanding of the process. Any finished product should be 90% complete before any actual labor is put into its execution.
I truly don’t understand the fixation on “complete automation”, or on trying to accomplish things that simply shouldn’t be done. It’s a paved path that requires careful input based on the output, not a teleportation device. I simply cannot see any sort of “looped GPT” tool being capable of generating quality material without constant supervision and modification, for now anyway.
It’s like the mechanic’s saying: “You aren’t paying me for the one hour it took to fix your car; you’re paying me for the years of experience and knowledge that allowed me to do the job in only one hour and with minimal parts.”
I don’t know if anyone has tried it, but GPT-3.5 was probably the worst assistant for mechanical advice. Seriously. We’re talking about life-threatening advice for simple tasks. For example, when I needed to take a rotor off, Davinci told me to completely remove the brake lines (terrible). I haven’t tried with ChatGPT recently, so I don’t know.
When it was time to put the rotor back on, there was no mention of the brake lines. Not even to properly prime it. To be fair, though, it does say “go to a mechanic”.
On the flip side, it is very handy if I know what I’m talking about, and instead of having it guide me, I am guiding it. It’s a huge difference in quality based on the fact that I know what I am talking about. Otherwise, it’s a death trap. Spaghetti in pocket, car inside of a building.
Some people don’t understand this concept until they try it for themselves.
Hopefully sooner rather than later.
N2U
I agree with everything you just said!
Absolutely amazing phrasing 
I’ve been using the mechanic analogy as well: we’re the mechanics, and OpenAI is asking us if we can figure out what’s wrong with GPT:
Hey there! We at OpenAI hope you’re having a great day. We wanted to discuss our ChatGPT with you because we’ve been noticing a few peculiarities in its behavior lately. It’s been generating some unusual responses that we just can’t seem to pinpoint, and it’s been concerning us a bit. On top of that, it’s been providing answers that veer to the right or left depending on the user - it’s quite odd because it seems to vary depending on who’s interacting with it. We’d really appreciate it if you could take a closer look at ChatGPT to see if you can figure out what’s going on.
Today I hope to finish modding BabyAGI. Why? Well, I have a friend with his own consulting business who coaches/trains potential CEOs for other companies. I fed a list of his questions into GPT-4, and he said the answers were “too high level”. So I’m hoping the BabyAGI mods I make will drill down better on the objective. But point-probing the model directly in the Playground could work too. It’s a crapshoot, but if this type of “brainstorming” could be automated, I think that would be huge!
So to your points @RonaldGRuckus and @N2U, it does come down to skill. I looked at the initial responses from GPT-4 and thought they were awesome! He thought they were too basic. He’s obviously skilled in this area, whereas admittedly, I’m a total n00b.
But in areas that I am not a n00b, such as code, I find the GPT results to be lackluster, similar to the CEO training guy.
I guess this is the allure of GPT. It makes you seem smarter than you are, and to you this is true, but to the experts, you are still a n00b. 
BabyAGI (and Chroma) are completely new to me. What benefits do you find in using a self-supervised loop (is that the right term?) as opposed to manually confirming the results and preparing the next task? My biggest concern is its tunnel vision.
I mainly use ChatGPT for coding, but I imagine that on some level it’s the same concept. I can write some complete garbage code, ask it to adjust a certain section, and it will leave my garbage code alone and try to somehow implement my new addition, which usually … is … interesting. I almost need to remind ChatGPT to “consciously” observe the complete code and its purpose every time, to ensure that it, as a complete unit, is logical and efficient. This has actually forced me to be completely modular with my coding, as I couldn’t just paste every single file every time I wanted to adjust another function in another file.
Yes! My biggest fear is entrenching myself in fallacies and non-logic because my foundation is weak, full of cracks and holes, yet ChatGPT at times happily helps me build on top of it. It has definitely made me more critical of myself and more focused on stress-testing the foundation first. I have gone down quite a number of rabbit holes only to eventually realize, “Wait. This is all nonsense!”
It reminds me of the (horribly paraphrased) saying, “First-year students always know everything”. Yes! That is me as well! The number of times I have said something in complete overconfidence and ignorance … well, I’m embarrassed, to say the least. Fortunately, I have people willing to challenge me at every stop. Thank you, fellow humans.
If you don’t mind me asking, what does your mod do for/with BabyAGI?
All I’m doing is removing the dependency on Pinecone, plus an overall simplification of the core code: stripping it down to the core algorithms, mainly for my own understanding.
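In case it helps anyone else, the Pinecone query can be swapped for a plain cosine-similarity search over locally stored vectors, something like this (my own simplification, not the official BabyAGI code):

```python
import numpy as np

def top_k(query: np.ndarray, stored: np.ndarray, k: int = 5) -> np.ndarray:
    # Cosine similarity between the query vector and every stored vector,
    # standing in for a Pinecone index query
    q = query / np.linalg.norm(query)
    s = stored / np.linalg.norm(stored, axis=1, keepdims=True)
    scores = s @ q
    return np.argsort(-scores)[:k]  # indices of the k most similar memories
```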
These “AI agents” are new to pretty much everyone. They just started hitting the mainstream a few weeks ago, and so I’m messing around with them to assess their capabilities and potential use.
N2U
It’s pretty interesting how people are reacting to agents as something new. OpenAI (or rather the red team) has used agents to test their models for every single release that I’ve read about (it’s in the technical report), but every time they’ve found them unable to perform to a human standard.
I assume that’s because they’re experts and actually know how to do stuff, whereas the average Reddit user will loudly proclaim:
It is I, GigaChad, the AutoGPT user extraordinaire! I walk this earth with an aura of confidence, charisma, and pockets full of spaghetti.