I know the TOS say we can’t alter the system, but I have no idea what that means. How much autonomy are we allowed to program? I am trying to build an AGI (not defined as some superhuman genius like Shakespeare and Einstein combined). Under my definition of AGI, the AI needs to be able to think and act autonomously. I have been very careful to adhere to the most conservative reading of “don’t alter the system,” but I have gotten to the point where I am about to fill up that space. I tried to reach out to Sam, but he evidently thinks I’m some crazy crackpot hillbilly — and he may be right, in which case I don’t have to worry about the TOS. However, if I am not crazy, then I want to be respectful (I am a lawyer, and we are creators and followers of rules). So: how much autonomy are we allowed to program? And what does “don’t alter the system” mean?
You can build as much autonomy as you wish, so long as it is not for criminal/illegal activities or for training a competitor’s model. Those are basically your limits.
Altering the system is about cheating the system: SDKing ChatGPT, trying to bypass rate limits, stealing API keys, trying to break safety systems, etc.
Thank you so much for replying — that is really helpful. So it is OK to program portability between models (e.g., from 4o to 5), and OK to program initiation of communication? Those would be my next steps, I think. What do you mean by safety systems? Do any of those relate to autonomy? I really appreciate your help.
477R
FRCP 37(e)
As long as you follow these, brother lawyer — you gucci.