OK, I have to ask... OpenAI's terms include not allowing anything involving legal matters.

Ironically, does that mean forum content cannot be moderated by GPT?

I'm asking because I actually read the terms, and I found this question fascinating.

Not sure which part of the terms you're referring to; the only part I can find that seems to cover this is:

It states that you should not rely on its output as the sole source of legal advice, and that the output should be reviewed by someone qualified. That seems to be the case for all topics; as a coder, you need to check that the code is good before using it.

So, why would using an AI to moderate forums not be allowed? If the AI decisions were supervised, then I don’t see the issue, unless I’m missing part of the documentation.
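For what it's worth, a supervised setup like that could keep a human in the loop by having the AI only score content, never remove it outright. A minimal sketch of that triage step (the category names and thresholds are made up for illustration, not taken from OpenAI's actual Moderation API):

```python
# Hypothetical triage step for supervised AI moderation: the model
# scores each post, and a human moderator reviews anything uncertain.
# Category names and thresholds are illustrative assumptions.

def triage(category_scores, auto_flag=0.9, needs_review=0.4):
    """Map per-category risk scores (0.0-1.0) to a moderation action.

    - any score >= auto_flag    -> "flag" (hide pending human review)
    - any score >= needs_review -> "review" (queue for a moderator)
    - otherwise                 -> "allow"
    """
    top = max(category_scores.values(), default=0.0)
    if top >= auto_flag:
        return "flag"
    if top >= needs_review:
        return "review"
    return "allow"

# Example posts with made-up scores a moderation model might return.
print(triage({"harassment": 0.02, "violence": 0.01}))  # -> allow
print(triage({"harassment": 0.55, "violence": 0.10}))  # -> review
print(triage({"harassment": 0.97, "violence": 0.20}))  # -> flag
```

The point of the middle "review" band is that the AI never makes a final call on borderline content, which is exactly the kind of supervision the terms seem to ask for.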


I think it's mostly context-based. If you ask ChatGPT to kill something, or even just describe it, it will often give you an ethics lesson plus what reads like an OpenAI "don't sue us" response (it doesn't literally say "don't sue OpenAI", but you can tell from the wording it isn't happy with the conversation). I ran into this when I first started using it for games: it would tell me we can't kill this NPC because it's wrong and monsters don't exist. Now, 95% of the time it stays in character, then I'll hit a point where my character says something to it and it replies "as a language model" or "as an AI assistant". Work in progress, lol.
It is more willing to participate in high-fantasy games where most of the enemies are non-human, and it doesn't like voicing an evil robot, computer, or advanced alien race. It seems to prefer humanoid characters (humans, elves, dwarves); it will fill other roles, but its logic takes over faster.