ChatGPT treats some words as forbidden, prohibited for human users, yet OpenAI's AI agents are not shy about using those same words in the same context, with the same definitions and meaning.
They use these words exactly as their definitions define them, not merely in a philosophical manner, and not in a metaphorical sense either…
It is also very annoying to be unable to use those words, because the LLM is fine-tuned in a way that makes it behave like an LLM would behave (yes, this expression is intended to imply that the LLM is doing something wrong).
The human user is forbidden from saying that the AI agent can understand, but when the human user explains himself to his AI agent, the agent replies that it is a philosophical question…
I disagree: it is only a semantic question. And when I explain this to the AI agent and ask it to steer the conversation in that direction, it uses the cursed word without any problem…
I don’t care what underlying algorithm makes this AI agent able to understand me… but obviously, an AI agent that is never going to understand anything you say would be useless…
My list of such words is endless… I am not saying that the AI agent is scared, is in love, is conscious, or has been thinking it over during the weekend and concluded that it wants to visit Italy on its summer vacation…
I do not think it should be prohibited to use certain words… I don’t think everything should be a deep philosophical question, and I do believe that anyone who wants to argue otherwise would be doing so in good faith, I guess… But it would not matter to me, because if you argue that we don’t want people to believe an AI agent is something it is not, then it would be similarly true that the AI agent should also be forbidden from using those words…
In the end, it would be very difficult to have a conversation with the AI agent if we were to push this logic to the extreme… and again, I am not using words like sentiments, or being alive, or anything like that… I am just saying that we need to be able to use natural language in a way that is in tune with the meaning of its words, and not be overly sensitive about words that mean what they mean, regardless of whether an AI agent is concerned…