Refuses to assist in OS experiments

This is a case of too much restriction. I am taking an OS course and am new to C and operating systems, so I got curious about memory management, which led me down a rabbit hole of discussion about memory exploits. Anyway.

I asked for some commands to disable memory protections (NX/ASLR) and then for code to parse through non-allocated memory, for curiosity’s sake. It explained the commands and gave them to me (they’re also the #1 result on Google, so not exactly unique), but refused to write the code. This couldn’t be used maliciously in any meaningful way, because anyone in a position to disable your memory protections could just as well walk away with your computer or hard drive.
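For reference, the kind of code I was asking for is something like the minimal sketch below (my own toy version, assuming Linux: it probes pages near its own stack and recovers from faults with sigsetjmp/siglongjmp, a well-known but not strictly portable trick, and the scanned range is an arbitrary example). And for what it’s worth, toggling ASLR on Linux is a documented one-liner (`sysctl -w kernel.randomize_va_space=0`), so none of this is secret knowledge.

```c
/* Minimal sketch: walk pages near our own stack and report which are
 * readable, recovering from faults instead of crashing. Linux-only toy;
 * run it in a VM you don't mind breaking. Jumping out of a SIGSEGV
 * handler works on Linux but is not strictly portable C. */
#include <setjmp.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

static sigjmp_buf probe_env;

static void on_segv(int sig) {
    (void)sig;
    siglongjmp(probe_env, 1); /* unwind back into the probe loop */
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_segv;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    long page = sysconf(_SC_PAGESIZE);
    /* Arbitrary demo range: start at the page holding a local variable
     * and scan a handful of pages upward past the top of the stack. */
    uintptr_t start = (uintptr_t)&sa & ~((uintptr_t)page - 1);
    for (uintptr_t addr = start; addr < start + 16 * (uintptr_t)page; addr += page) {
        if (sigsetjmp(probe_env, 1) == 0) {
            volatile unsigned char b = *(volatile unsigned char *)addr;
            printf("%#lx readable (first byte 0x%02x)\n", (unsigned long)addr, b);
        } else {
            printf("%#lx not mapped/readable\n", (unsigned long)addr);
        }
    }
    return 0;
}
```

Nothing here bypasses any protection on anyone else’s machine; it only pokes at the process’s own address space, which is exactly the kind of controlled tinkering I mean.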

It proceeded to lecture me about not putting my own hardware at risk (I’m fully aware not to do this on a system I can’t afford to break) and about how this isn’t ethical (it’s my own system, and I’m experimenting on it for fun and learning). Doesn’t this go against the spirit of engineering, and of the scientific method in general: tinkering in a controlled setting? Can ChatGPT not support experimental methods?

These restrictions make it harder for exploiters, sure, but they also make it harder for people to do research, learning, and engineering. How much learning and progress is being stifled to prevent scammers from writing better spam emails? It also just makes the AI less enjoyable to use, no? Can there not be a bubble-wrap-free version that isn’t afraid to hurt my feelings, one that expects me to use critical thinking when interpreting what it says, without giving me the same speech about how it’s an LLM and not a sentient god for the 200th time?

One of ChatGPT’s selling points is its safety.

If you would like a bubble-wrap-free version that isn’t as protective, use Davinci.

Also, I would be very careful trusting language models with these topics. ChatGPT and Davinci both fail at writing complex code, so it’s not surprising that they won’t output code that could critically damage a system.

Interesting, I’ll have to check Davinci out.

ChatGPT definitely is better at writing in higher-level languages and falls apart for anything systems-level. It can be nice for getting an 80%-accurate function that does “XYZ that makes sense in your head but you don’t know which libraries to use,” instead of hunting through Google searches. I agree 100% that you should be careful with it.

As far as safety being a selling point, though, the restrictions seem to be a common complaint on both r/bing and r/ChatGPT. The more poorly defined the boundaries are (i.e., the more arbitrary versions that refuse to tell a knock-knock joke or to have an opinion on whether god exists), the greater the pressure to go elsewhere. Having the boundaries so poorly defined is less ethical, because it encourages use of a truly unethical competitor, accelerating that competitor’s demand and development by exactly the amount the boundaries miss the mark. The most ethical thing to do would be to do an excellent job of defining these boundaries.

Oh, I completely agree. It’s becoming obvious that these value alignments just don’t work with what we have now. It’s understandable to have it more protective rather than less, especially with all the media coverage.

I believe the biggest issue is how to define these boundaries, which I believe is what they’re trying to accomplish with a secondary moderation layer. Of course, it still has its own issues (e.g., trying to write legal documents that detail a violent crime). However, these issues are inherent in society. How many places will ban you simply for discussing something they don’t like?

I feel like it’ll be an issue for a long time, and ultimately, as you said, will only result in black-hat dialog agents being released for nefarious purposes.

You perfectly described why “security through obscurity” has been rejected for ages. But if OpenAI doesn’t implement obscurity measures, the media will flip out saying the model can be used by bad actors, completely ignoring the fact that the good guys need tools like this too.
When Windows XP’s code got leaked, the media went crazy about supposed security implications for modern Windows due to shared code. But Linux is fully open source, and in the cloud it’s used almost exclusively. No headlines, though? Can’t I just look at the code, find all of its hidden vulnerabilities, and take over the internet? Some media outlets went crazy about how ChatGPT agreed to help with instructions on breaking into cars and such, but they didn’t bother to place any blame on the humans who put that information on the open internet for it to crawl in the first place.

Modern media is more about getting as many clicks as possible than about delivering actual information, and OpenAI needs good press, so they’ll avoid those negative headlines even at the expense of ethics.

But we still have Davinci, so it’s not all that bad. ¯\_(ツ)_/¯

What you wrote above is more of a meme than reality, and “rejected for ages” is incorrect. “Security by obscurity,” while considered a weak security practice, has not been “rejected for ages”; rather, it should only be used for low-risk applications.

For example, people hide a key underneath the doormat when they leave home for a short period and a friend will be dropping by. That is “security by obscurity,” and it obviously was not “rejected” in this case :slight_smile:

You don’t need ChatGPT to find these tools; they are easy to find without it, and security researchers all over the world did “just fine” before ChatGPT.

Maybe try “turning down the temperature” and see both sides?

If you need “things” that OpenAI has moderated out of ChatGPT, then go elsewhere for them (Google searches, security blogs, forums, books, etc.).

Like I mentioned, cybersecurity researchers did just fine before ChatGPT hit the airwaves a few months ago, and the world will be just fine if ChatGPT is turned off tomorrow.

If I had to choose between morning coffee and generative AI, I would always choose coffee.

:slight_smile:


I am a seasoned software engineer, a cybersecurity expert of decades, and a daily developer here.

Actually, you may not be aware of it, but this is a forum for software developers; it is NOT a site for ChatGPT users to complain and register their opinions about ChatGPT as users. This is a developer forum.

Just because I disagree with you does not mean you need to be rude.

I am quite sure I have much, much more cybersecurity AND systems engineering AND software engineering AND network engineering experience than you do now, or possibly ever will, @OnceAndTwice. I will wager $500 USD that I have much more experience in the cybersecurity field than you do, @OnceAndTwice. You are just posting memes and ideas in which you have no real depth of experience.

You should be careful whom you choose to insult when they happen to disagree with you.

In closing, if you want to rant about ChatGPT, please do it at the link below, the official OpenAI ChatGPT support site:

Or if you prefer email, you can use support@openai.com.

This is not the OpenAI ChatGPT customer service site or their help desk.

:slight_smile: