We’re using ChatGPT 3.5 through the python API to develop a “Help Desk” chatbot.
We want our users to not realize that they are interacting with a chatbot, but depending upon what the user types, ChatGPT 3.5 sometimes generates responses which contain disclaimers such as “As an AI Language Model …”, or “I am not programmed to …”, other similar text which alerts the user to the fact that they are dealing with a chatbot.
There are numerous variations to the wording of these kinds of disclaimers, and it is impossible to come up with regexes and other forms of pattern matching which can identify each and every one of them and eliminate or replace them.
Is there some way to reliably (!!!) get ChatCPT 3.5 to stop identifying itself in its responses as an AI language model and stop responding with these kinds of disclaimers? If a user’s prompt can’t be responded to, then a response with something like “I don’t know,” or “Can you explain that further?”, or “I’m sorry, but I can’t help you. Please contact [contact info for real person],” or similar things would be acceptable.
Any ideas, thoughts, or suggestions? Thank you in advance!