Legalities of self improving agents

What is the legality of building agents that improve themselves by consuming ChatGPT’s output? These agents will query ChatGPT many times to solve complex problems. OpenAI’s legal policy appears to prohibit the creation of ‘models’ by using ChatGPT. The ‘model’ won’t be the same thing as a transformer architecture, so it doesn’t directly compete with OpenAI. Has this issue arisen with AutoGPT or its competitors?

You’re choosing a very cherry-picked definition of what it means to “compete with.”

By the terms of service you cannot use the output of the OpenAI models as training data for another commercial model.

The underlying architecture isn’t the issue—the model people might choose to pay you for access to rather than paying to access an OpenAI model is the issue.

Interesting. I have little interest in building a transformer-based model like GPT-4. I am solely interested in building agents. These will in fact query transformer-based architectures, so OpenAI will profit, as I will be querying LLMs like their own. Is there anyone from OpenAI who can clarify their legal stance on this? Does it compete? Is it even a ‘model’ when it is a prompt/code management system and not a neural net at all? Is it legal for agents like AutoGPT to improve themselves by querying OpenAI?

There appears to be some confusion here: OpenAI’s policies don’t allow you to use the output from their models to create a competing AI model.

You’re not allowed to programmatically interface with ChatGPT either, but you’re good to go if you want to develop something that queries GPT through the API a bunch of times.

I hope that helps clear things up :laughing:


N2U, I thank you for your priceless advice. I am using the API.

You’re welcome, remember to use the moderation endpoint though so you don’t get yourself banned if your agents start going off track :sweat_smile:
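A minimal sketch of that screening step, assuming the official `openai` Python package; `is_flagged` is an illustrative helper that just parses the documented response shape, so it can be checked without an API key:

```python
# Sketch: screen agent output with OpenAI's moderation endpoint before
# feeding it back into the loop, so policy-violating text never recurses.

def is_flagged(moderation_result: dict) -> bool:
    """Return True if any result in a moderation response dict is flagged."""
    return any(r.get("flagged", False) for r in moderation_result.get("results", []))

def moderate(client, text: str) -> bool:
    """Call the moderation endpoint for one piece of text.

    `client` is an openai.OpenAI() instance (needs OPENAI_API_KEY set).
    """
    resp = client.moderations.create(input=text)
    return any(r.flagged for r in resp.results)
```

In an agent loop you would call `moderate` on each model response and stop (or discard the step) whenever it returns True.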

I’ll give you a heads up btw, you’ll get a much higher success rate at a much lower cost by just writing your own, here’s a simple example:
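A minimal sketch of such a hand-rolled loop (an assumed reconstruction, not the original poster’s exact code); `run_agent`, `call_model`, and `is_done` are illustrative names, and the model call is injected so the loop can be exercised without hitting the API:

```python
# Sketch of a bare-bones agent loop: query the model repeatedly,
# feeding each answer back into the context, until a stop condition.

def run_agent(goal, call_model, is_done, max_steps=10):
    """Drive `call_model` toward `goal`, at most `max_steps` queries.

    call_model(prompt) -> str  queries the LLM (e.g. via the OpenAI API)
    is_done(answer) -> bool    decides when the goal is satisfied
    Returns the final answer, or None if it gave up.
    """
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        answer = call_model("\n".join(history))
        history.append(answer)
        if is_done(answer):
            return answer
    return None  # budget exhausted without reaching the goal

# With the real API, call_model might wrap something like:
#   client.chat.completions.create(model="gpt-4", messages=[...])
```

The point of injecting `call_model` is that you can cap cost, log every query, and run the moderation check in one place, which is hard to do with an off-the-shelf agent framework.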


Ya, I’ve been writing my Goals project by hand. I’ve got a Goals agent running successfully for simple tasks, including writing, compiling, and testing code. I’m adding multi-agent support now. It should eventually be able to look at its own codebase to make improvements, but that is where the legal questions will arise. I’m not going to do that until I can get some legal direction on that issue. The issue of it writing plugins, compiling them, and plugging them into itself to expand its capabilities raises the same type of legal question.

Closing this topic as it has a marked solution.