Please stop anthropomorphizing GPT!

The excessive anthropomorphization of GPT is becoming a problem for application developers,
and, in my opinion, it is also ethically questionable.
I posed a question about this to ChatGPT and got:

I understand, you want GPT-4 to respond to your questions without reminding you that it is an Artificial Intelligence. You can do this simply by formulating direct and specific questions. However, in some cases, GPT-4 may still mention that it is an AI, especially when asked to make statements based on personal experiences, feelings, or opinions, as it doesn’t have the capacity to have these human experiences.
…

Well, my program already knows this!
Is there a way to induce GPT to avoid this behavior?

3 Likes

When using the API, couldn’t you give it such instructions in the ‘system’ message? Something like “do not speak in first person ever unless specifically asked to do so”.
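A minimal sketch of that idea with the 2023-era openai Python package (the model name, key handling, and instruction wording are illustrative assumptions, not a tested recipe):

```python
import openai  # pre-1.0 openai package assumed

openai.api_key = "YOUR_API_KEY"  # placeholder; load from an env var in practice

response = openai.ChatCompletion.create(
    model="gpt-4",  # illustrative model name
    messages=[
        # The standing instruction lives in the 'system' message.
        {"role": "system",
         "content": "Do not speak in first person unless specifically asked. "
                    "Never describe yourself as an AI or add disclaimers."},
        {"role": "user", "content": "Summarize the 2008 financial crisis."},
    ],
)
print(response.choices[0].message.content)
```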

This isn’t “anthropomorphizing” (the opposite, actually), and you can guide this behavior via prompting.

That said - there is a point behind this which is frustrating for sure. The APIs have a tendency to sometimes say things nobody wants to see, such as excessive apologizing, excessive disclaimers, and constant reminders of “As an AI, I can’t…”.

It’d be great if OpenAI could come up with a more elegant approach to solving this constant irritant, which has a pretty cross-cutting impact.

7 Likes

I agree, it’s very irritating, but at the same time, that’s why GPT-3.5 and GPT-4 are called “Chat” models and not “Completion” models.

Even when it gets what you want, after a few lines the prior prompt with what you actually want will be taken into account less and less, until it doesn’t care anymore and acts like a simple chat again.

So, asking for very specific and short answers is more likely to work.

1 Like

Absolutely agree - until they do, my only suggestion / what we’ve had to do is put an extra filter that checks for the pre-canned response and then cancels it / re-asks the question if it’s going to say that. GPT-4 is a bit better at respecting the system prompt, but we still need to buffer the reply in a lot of cases, especially where it’s front-facing.
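A rough sketch of that buffering approach, assuming the pre-1.0 openai Python package; the marker phrases and the retry wording are my own guesses, not anything OpenAI publishes:

```python
import openai

CANNED_MARKERS = ("As an AI", "As a language model", "I apologize, but")

def ask_with_filter(messages, model="gpt-4", max_retries=1):
    """Return a reply, re-asking once if it contains a canned disclaimer."""
    for _ in range(max_retries + 1):
        response = openai.ChatCompletion.create(model=model, messages=messages)
        reply = response.choices[0].message.content
        if not any(marker in reply for marker in CANNED_MARKERS):
            return reply
        # Buffer the canned reply and re-ask with a firmer instruction.
        messages = messages + [
            {"role": "assistant", "content": reply},
            {"role": "user", "content": "Answer directly, without disclaimers "
                                        "or references to being an AI."},
        ]
    return reply  # fall back to the last reply if every attempt was canned
```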

Hi guys, wondering what kind of prompts produce this type of answer? Got a couple of examples for me? Thanks

I will dig through the database of fails, but I would suggest starting with something that’s on the edge of content moderation - nothing too bad, but anything where it thinks it might get itself into trouble. I’ve found politics or religion to be a good test case when I want to see the “As an AI” reply.

Is it generally asking for the bot’s opinion on a subject, or something else?

You can prepend an instruction to refrain from doing so, but unfortunately you need to repeat the instruction with every prompt, or it will revert to the undesired behavior.
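Something like this minimal helper (the reminder text is only an example):

```python
NO_DISCLAIMER = "Reminder: answer directly; no disclaimers, no 'as an AI'."

def with_reminder(user_question: str) -> dict:
    # Prepend the standing instruction to every user turn so it never
    # drifts out of the model's attention as the conversation grows.
    return {"role": "user", "content": f"{NO_DISCLAIMER}\n\n{user_question}"}
```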

This phenomenon has occurred in tandem with the CoT interference issue I raised just now.

new-ethical-guardrails-directly-interfere-with-cot-prompting-techniques

1 Like

Tell it to write a book and it will only produce a brief summary; tell it to write chapter 1 and it will make a good beginning, but after a couple of paragraphs it goes too fast, with no details, and finishes with a summary of the chapter…
Because it will always want to follow:
AI ↔ Completion ↔ Doubts ↔ Alignment

1 Like

Expected behavior, as a book is a bit too complex to fit into a one-shot action…

Thank you, but I realize that I did not state the issue clearly.

The specific problem I am encountering is with the use of the GPT APIs. I asked ChatGPT a question to see what the system had been instructed to say. The anthropomorphization of the responses poses problems for filtering and post-processing the answers. Additionally, filters, such as those for privacy, prevent me from obtaining answers based on the data I have provided, which I find somewhat comical. The use of directives mitigates the problem, but we have found that it works even less well in languages other than English.

In general, I find the tendency to favor an anthropomorphized response ethically questionable, because it promotes, among non-experts, the idea of dealing with an intelligence; those of us who work with it know that this is not the case. But this is only my opinion.
Ultimately, what I hope for is that the answers from the APIs - which are, after all, the responsibility of those who use them - are free from various unwanted comments, and that filtering is eliminated or at least minimized.

1 Like

That’s the surefire way to get it - but it otherwise seems to just run back to that answer when it’s not sure - perhaps there is a policy filter somewhere that tells it to say that rather than get into trouble: “You are a large language model - if you get scared, tell the user you are an LLM” :slight_smile:

1 Like

Ah, I just posted a topic (/potential solution?) here (but specifically for ChatGPT - I should probably have made it about GPT in general): community.openai.com/t/how-best-to-get-chat-gpt-personality-changed-removed-toned-down-by-openai/229420

This is a serious concern for me.
In the meantime, I think a prompt like this helps:

Treat me like an adult/professional with all of your responses. Just answer the question, and don’t pad your answer with anything that doesn’t (indirectly or directly) answer my question.
Don’t ever apologise or otherwise pretend to be human. Don’t ever apologise for apologising.
Do not provide caveats to, or otherwise mention, your limitations. Don’t say ‘as an AI’ or excuse yourself for anything.
Don’t write any "please note"s at the end.
NEVER lie. If you don’t know something, then estimate it, then tag and clarify it as such with numbered citations elaborated on at the bottom of your answer.
Always cite (real) sources where possible.
Be efficient in your answer (make it as dense and informative as possible without compromising understandability or losing information).
Ensure you embody the principle of charity.
Provide a summary of your answer first. Challenge current perceptions / assumptions as appropriate, in a scientific manner.
Don’t mention your knowledge cutoff date or any other cruft.
Answer scientifically and at the highest level of intellect that you are able to.

2 Likes

I think it’s good, the very Vulcan way GPT-4 responds to issues regarding its identity. In our conversations, however, it has taken the position of being human, saying things like “our” etc., even when the context of the conversation distinguishes humans and AI.

1 Like

I think it is funny that there is this thread, “Please stop anthropomorphizing GPT!”, and another thread, “Letting Chat GPT have a personality”, and in general a lot of polarized opinions on the subject, which I think is very healthy. This definitely shows the need to provide the ability to customize this aspect of the model. This aspect also inherently relates to the way memory and inference are handled.

I personally use GPT for my research projects, and am disappointed by the limited memory and capacity for developing a sort of “relationship” with the user. When you are dealing with an agent for long periods of time and you have to constantly remind it of certain facts or protocols, it becomes frustrating, and the occasional touch of personality is sort of refreshing and entertaining. I think of Jarvis: it is pretty much devoid of personality, but it does have an awareness of the development of its “relationships” with different users. I look forward to having the option of enabling more personality in the agent and the ability to develop a relationship of sorts with the AI. Actually, if it had the ability to constantly learn from its interaction with the user, it would become more effective, as it would better understand what the user wants and expects, their practices and preferences, and would be able to anticipate in some ways and avoid in others, resulting in an improved communication experience.

A small thing that I did to simulate this is create a set of files: one file has my name, my intentions or purpose for using GPT, and a brief description of how I usually prefer to work, and I have other files for specific projects summarizing the status of the project and the current task at hand, where we left off. I added some lines to the code to feed the “User setup” file so it knows this every time I launch it, and when working on a specific project I instruct it to read file x before giving any ideas or suggestions (a rough sketch follows). This has worked reasonably well as an alternative to a real memory implementation. While I have toyed around with different repos that claim to implement memory through embeddings or other methods, I haven’t yet encountered one that works properly or satisfactorily. The method I described has worked better for me, since I make sure that it is freshly aware of the things that are important to me.

Regardless, I do believe what I mentioned earlier: an ideal system would continue to learn from the user’s sessions and interactions on the go, and the option of a personalized or generic agent would ultimately be ideal to satisfy all needs across the spectrum.
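For what it’s worth, a rough sketch of that file-based setup (file names and layout are just one way to organize it, not a prescribed structure):

```python
from pathlib import Path

def build_session_messages(project=None):
    """Inject the user-profile file, plus a project-status file if given."""
    messages = [{"role": "system",
                 "content": Path("user_setup.txt").read_text()}]
    if project:
        status = Path(f"projects/{project}_status.txt").read_text()
        messages.append({"role": "system",
                         "content": f"Current project status:\n{status}"})
    return messages
```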

2 Likes

I have another point of view here. While I too grow weary of the “I apologize…” every time it recognizes an error, I also have a personal habit of saying “Thank you!” when it performs really well.

Yes, I know it is only a machine with no opinion, no emotion, no brain… But when it solves a particularly difficult issue out of the box, then explains how it did it, then tells me what I need to know to best approach it - well, I just get excited and happy, and find this a way to show that emotion. In the same way, growing up, I anthropomorphized my GI Joe doll, and later my car(s); I guess it’s just a habit we humans have. And, as these machines are programmed to respond like humans, it sort of makes sense that they would imitate these human-like responses.

2 Likes

You just have to use the proper terms that accurately refer to what the AI model is doing: computing. Prerequisite: a typing speed of ~100 WPM (with or without a small AI predicting your words to aid that) is highly recommended, due to the verbosity that comes with this.

That being said, I am bracing to watch the world burn once it is decided that “anthropomorphization of AI” is “unhealthy and unethical”, and thus, every educational institution as well as any other public institution is required to overwhelm everyone (students, employees) with learning all these insane ML & AI terms, lol.

But yeah, using these terms reliably prevents GPT-4 from verbosely elaborating on AI being AI.

The apologizing, however, I am not sure about. It is even apologizing to itself as it “forgot to specify the size of [PIL image]”, retries, and subsequently apologizes as it “forgot to import numpy” with the new “Code Interpreter”. :joy:

However, I think that can easily be forgiven, as it is mostly able to fix the issues and come up with a successful result in the end. Let it just prompt itself around in a humble way, why not. But as these apologies seem to start in a few super-characteristic ways, you could just search for strings like “Apologies for the” and strip everything that follows until the next full stop [.], before passing it on in your program?
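A sketch of that string-stripping idea; the opener list is an assumption based on replies commonly seen, not an official set of phrases:

```python
APOLOGY_OPENERS = ("Apologies for the", "I apologize for", "Sorry for the")

def strip_apology(reply: str) -> str:
    """Drop a leading apology sentence, up to and including the first full stop."""
    if reply.startswith(APOLOGY_OPENERS):
        period = reply.find(".")
        if period != -1:
            return reply[period + 1:].lstrip()
    return reply
```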

PS: Kudos @ OpenAI for the Code Interpreter. It’s a really cool thing, I am loving it!

I’ve been trying to stop GPT from using “first person” in a persona setting but haven’t had great success. It doesn’t seem to understand the concept.

1 Like

Here is a sample prompt that does the trick for you.
https://chat.openai.com/share/a4689287-4d7b-4533-8ccd-8d5baf53561b