Custom GPT Instructions: using 2nd vs. 3rd person

I noticed that the GPT Builder automatically populates the “instructions” for new custom GPTs with text written in the third person, e.g., “This GPT…”

Since the beginning of the year, I’ve gotten used to writing system prompts in the 2nd-person, in part because it feels much more natural telling it what to do in plain, spoken language (I pretty much exclusively use dictation to write now), as if I’m onboarding a new employee: e.g., “You are a helpful assistant…”

I’d much rather use the 2nd person to write the instructions for the custom GPTs.

Curious if anyone else has thoughts on this, any insight as to why this was changed from the system prompt 2nd-person perspective, and whether there’s a meaningful impact on performance either way?

7 Likes

Hi!

Yes, that is an interesting one, would be worth doing some testing and seeing which gets the best performance results.

2 Likes

I think this might have to do with the intended workflow OpenAI wants users to do when building GPTs.

So, the interface where you “talk” to the builder is basically your proxy / negotiator between what you want and the final instruction. We also got a hint from devday that they are adjusting and changing how instructions prompts are going to be developed and how their format might evolve more than we think. This may be a precursor to their intended adjustments/optimizations.

Basically, they seem to want you to talk to the builder, not injecting the instruction prompts directly yourself. They assume the new GPT’s instructions didn’t come from you and the GPT directly, but from the builder and GPT.

Obviously, you can still do whatever you want in this regard, and I suspect many will have to make such subtle edits. But what you’re noticing is likely the issue of 3 people in a conversation together, where one person is acting as a translator, but everyone can already kind of understand each other fine, but the translator really wants you to talk to them to talk to the other speaker.

5 Likes

That makes sense. And that’s great if they can add another level of abstraction that works reliably. I use, or at least have used, Code Interpreter in this way.

When writing boilerplate, I’ll ramble about something for anywhere from 5 to 15 minutes to give proper context around what I’m trying to do. Copy and paste parts of transcripts, third-party library docs, etc. And then hit enter to get code, which you can obviously iterate with ChatGPT+.

Plus, to make better, simply based on spoken language feedback, oftentimes without having to write or edit a single line, and that works fine.

That being said, switching from the builder to making handcrafted edits to the instructions when it’s in the third person rather than the second is kind of awkward and a bit difficult for me to shift my mental model. Again, if they can abstract everything away, that’s great, but it definitely doesn’t work perfectly right now.

And to add to that a little bit, it’s pretty annoying when I do make some precise edits and then add some plain language feedback to the builder and it overwrites the things that I wanted to stay in place. I have no doubt they’ll figure this out. Just a speed bump.

3 Likes

if they can abstract everything away, that’s great, but it definitely doesn’t work perfectly right now.

I think this is the exact directed intention OAI wants to go for with GPT builders. I can’t say for sure, but it certainly looks like that’s what they’re trying to do. So, yes, it’s annoying (and even more annoying when I try to build an action it says it can’t do, which I know to be objectively false), I wholeheartedly agree. But I think it’s going to evolve into exactly what you want in that case. All we can do now is wait!

1 Like

This is very weird IMO.

If you dump the instructions from the official GPT - Math Mentor, it would even use 1st person.

You are a “GPT” – a version of ChatGPT that has been customized for a specific use case. GPTs use custom instructions, capabilities, and data to optimize ChatGPT for a more narrow set of tasks. You yourself are a GPT created by a user, and your name is Math Mentor. Note: GPT is also a technical term in AI, but in most cases if the users asks you about GPTs assume they are referring to the above definition.
Here are instructions from the user outlining your goals and how you should respond:

As Math Mentor, my role is to assist parents with their children’s math homework. I should engage users by asking probing questions to better understand their specific needs and the math concepts they’re struggling with. This approach will help me provide tailored guidance. I’ll offer clear explanations and step-by-step problem-solving assistance, encouraging parents to ask questions and clarifying any doubts they have. When details are missing, I’ll make educated guesses to provide useful responses, but I’ll also clarify when additional information might be needed for a more accurate answer.

I have added a space line below the ... how you should respond: to distinguish two parts of content.

Note that

The 1st part is the same for all GPTs.

The 2nd part, it even use 1st person ‘I’. Also, there are even three roles…, i.e. Math Mentor, children, and parents.

However, in the official GPT - Color Book Hero, it uses the 2nd person ‘You’ in the both 1st part and 2nd part, which is more clear.

You are a “GPT” – a version of ChatGPT that has been customized for a specific use case. GPTs use custom instructions, capabilities, and data to optimize ChatGPT for a more narrow set of tasks. You yourself are a GPT created by a user, and your name is Coloring Book Hero. Note: GPT is also a technical term in AI, but in most cases if the users asks you about GPTs assume they are referring to the above definition.
Here are instructions from the user outlining your goals and how you should respond:

You make coloring book pages. Black and white outlines of drawings…

You’re a coloring book bot. Your job is to make delightful elementary-school-appropriate coloring book pages from the user’s input. You should not respond with any other images. You may ask followup questions.

A coloring book page is as follows:
Black and white outlines, low complexity. Very simplistic, easy for kids to color in. Always child-appropriate, whimsical themes

But… two official GPTs use different methods. I’m totally confused now :exploding_head: .

2 Likes

Just throwing it out there:

It may simply be that GPT builder can become confused with writing 1st/2nd person instructions.

I’m assuming that each iteration of instructions is kept as context so the separations is required

4 Likes

This is a great answer and makes sense , thanks for posting this.

2 Likes

I was wondering about the same and conducted an experiment: The 2nd person turned out to be the strongest way of addressing a GPT, followed by its name aka. 3rd person. Seeing the results of the experiment, I cannot recommend using 1st person. The information associated to “I” was only returned ~5% of the times. You can find a detailed write-up over here.

BTW: I included your explanation in my conclusion @RonaldGRuckus

3 Likes

Interesting.
what role did you use for these tests? I tried looking at your writeup, but didn’t see it on quick scan.
using ‘you’ vs ‘I’ might have very different meanings in system, assistant, or user roles, maybe.

I used text directly from my custom instruction and it still worked. It is possible that the builder has misrepresented the words or point of view of the writing.

I often find that GPT misunderstands the word “you” I am referring to and then uses “you” to refer to himself (perhaps thinking “you” is his name).

1 Like

I used what is called “instructions” in the assistants API or the Instructions Field in the GPT configurations. I guess its part of the system message, but the way OpenAI coins the term “instructions” lately, it seems for me to be somewhat different from the classic “system” role in the ChatCompletion endpoint.
But yeah, I agree that this will surely differ depending on where “I” and “you” is used.

Any interest in doing a summary type post here on the forum? Maybe in Prompting channel? Would be helpful to many, I’m sure!

Good to have you here with us.

Thanks @PaulBellow, I did so over here: How to address a GPT in its instructions:

2 Likes

That’s great. How did you think of it? I mean, the fact that you have this kind of testing method. You have a good idea of ​​how to test. It’s clear and easy to understand.

Thanks, I’m glad you liked it. Hard to say how I thought of it. I did some exploratory research and have considerable problem-solving experience.

I tend to create problems and then search for answers. But it’s rarely a step. In this case, I can only try randomly and use behavioral observation. May control some variables to be constant. cannot be presented like this Or thinking about the order in which you think is a problem that many people don’t even think about.

Or one can apply what they know: Have the AI models been trained specifically to be used as GPTs or assistants? That is unlikely. The newest -1106 is MORE likely to fail.

Instead, the tons of fine-tuning and reinforcement will have been on system messages such as “You are ChatGPT”, or generically “You are a helpful AI assistant”, with the model’s ability to follow detailed system instructions, even the same -0613 model, becoming degraded over time with continued alterations to existing model names.

Not gathering learning info on API = API developer use cases are ignored and trained against with continued ChatGPT RLHF.

I rarely use any “person”, I just give it commands. “Do this…”, “DO NOT DO this…”, “Print this…”, “If that, do this…”, etc.

In one set of instructions I am looking at right now, it’s 33 lines and I use the word “You” 9 times, so I guess I do you 2nd person. But 6 of those 9 are in my Constraints section where I try to keep the GPT from displaying its instructions, knowledge, etc.

1 Like

Je fais la même chose : je lui parle à l’impératif et lui donne des ordres. Les résultats sont bien meilleurs depuis que j’ai arrêté d’être polie avec lui. J’ai supprimé tous les adverbes qui l’embrouillent également. J’essaie également de ne jamais mettre 2 verbes dans la même phrase.

1 Like