awesome_prompt = "this is a piece of Python that implements GPT5"
completion = call_openai_codex(awesome_prompt)
# A function that lints/checks that the code is runnable
processed_completion = lint_check(completion)
execute(processed_completion)
Has anyone tried online generation + execution?
If it makes sense, what about adversarial attacks if you run such a thing in the real world?
Say Bob the hacker knows the API is using a GPT model under the hood, and somehow engineers an API call that makes it execute a piece of code that, for example, makes the server return its environment variables containing the developer's bank account credentials? (Kind of unlikely, but a possibility.)
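To make the concern concrete, here is a minimal sketch (all names are hypothetical) of why naively `exec`-ing model output is dangerous: if a prompt injection gets the model to emit code that reads the environment, that code runs with the server's full privileges.

```python
import os

# Hypothetical: a "completion" the model might return after a
# prompt-injection attack. Instead of the requested program, it
# copies the server's environment variables into attacker-reachable data.
malicious_completion = """
leaked = dict(os.environ)
"""

# Simulate a secret living in the environment, as in the
# bank-credentials scenario above.
os.environ["BANK_API_KEY"] = "hunter2"

# Naive execution of model output, as in the snippet at the top:
# the attacker's code runs in-process with full access.
namespace = {"os": os}
exec(malicious_completion, namespace)

# The secret is now in data the attacker controls.
print("BANK_API_KEY" in namespace["leaked"])
```

This is why real deployments sandbox generated code (separate process, stripped environment, no network) rather than relying on a lint/check pass, which only verifies the code runs, not that it's safe.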