Hello all!
I have a question I hope you can help resolve. I’ve built a semi-autonomous cybersecurity audit ChatGPT plugin. It can scan for code vulnerabilities, run standards compliance checks, give info and training on best practices, and compile reports.
My main concern is that it is effectively a pentesting tool, and I don’t think it’ll be allowed into the store…however, when used properly, I think it’s pretty great. I’d like to find a way to release it safely.
I know of at least one plugin with a similar end use in mind (not intended for “hacking” as such) that was rejected. I think the optics of an AI “hacker” are bad, and regardless of your intentions, the tabloid media would frame it that way.
I totally agree that AI-augmented pen testing IS going to be a thing, but it’s also worth considering that these are AI’s tentative baby steps into the mainstream, and this is going to be a hot-potato topic for some time.
Yeah, I agree. It’s tricky, because quite frankly it doesn’t do much more than can already be done with ChatGPT if you know what you’re doing. I just created a fun, AutoGPT-like UX.
Maybe I can run everything through the Moderation API, but I don’t see how it would catch anything.
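For what it’s worth, a minimal sketch of what that check might look like, assuming the official `openai` Python SDK and the current moderation model name; `scan_request` is just a hypothetical placeholder for whatever text the plugin would pass along:

```python
# Minimal sketch: pass plugin input through the OpenAI Moderation API before
# acting on it. Assumes the official `openai` Python SDK (v1.x) and an
# OPENAI_API_KEY in the environment; `scan_request` is a hypothetical example.
from openai import OpenAI

client = OpenAI()

def is_flagged(text: str) -> bool:
    """Return True if the Moderation API flags the given text."""
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    return response.results[0].flagged

scan_request = "Audit this code snippet for SQL injection vulnerabilities."
if is_flagged(scan_request):
    print("Request flagged; refusing to run the audit step.")
else:
    print("Request passed moderation; proceeding.")
```

The catch is exactly what I said above: moderation checks for policy-violating content categories, and a benign-looking request to audit some code probably wouldn’t trip any of them.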
I think I’ll reach out to Logan first and ask what he thinks.