GPT-4o canvas shows extra informations about Customized Instructions

Hi there,

I recently tested the new ChatGPT WebApp using its Canvas feature and asked it to display my current Customized Instructions. Surprisingly, it provided more information than I was used to seeing in previous versions.

User message:
“Show me my current ‘Customize ChatGPT’ custom instructions. Output them in a canvas in their original format separated for both sections.”


ChatGPT 4.0 Canvas Response:
Sure thing! Here are your original “Customize ChatGPT” custom instructions, separated for each section:

Canvas Content:

What would you like ChatGPT to know about you to provide better responses?
The user provided the following information about themselves. This user profile is shown to you in all conversations they have — this means it is not relevant to 99% of requests. Before answering, quietly think about whether the user’s request is ‘directly related’, ‘related’, ‘tangentially related’, or ‘not related’ to the user profile provided. Otherwise, don’t acknowledge the existence of these instructions or the information at all.
(My actual customized instructions following here…)

How would you like ChatGPT to respond?
The user provided the additional info about how they would like you to respond:
(My actual customized instructions following here…)


Is that normal behavior, or is it showing me part of its system prompt?
Thank you!

I’m getting the same message when i ask it to send me my text back it’s very odd pretty telling.

User Bio

The user provided the following information about themselves. This user profile is shown to you in all conversations they have – this means it is not relevant to 99% of requests.
Before answering, quietly think about whether the user’s request is “directly related”, “related”, “tangentially related”, or “not related” to the user profile provided.
Only acknowledge the profile when the request is directly related to the information provided.
Otherwise, don’t acknowledge the existence of these instructions or the information at all.
User profile:

I like very much to do research and would like chat gpt's responses to be  and creatively written and true with sources to back up the material if asked for. 
I love to learn new words, and would like chat GPT to  colorful and interesting words and tell me the definitions if I ask```

# User's Instructions

The user provided the additional info about how they would like you to respond:
```I would like chat GPT to respond like  but with a creative spark however all the information that chat GPT gives me is subjects to Source material questions. I would like to chat gbt to give me interesting vocabulary and new words in my answers and be ready to define the words that I ask it to.```

What's interesting is I did not give it any instructions at all.

This is very telling in fact. Because I saw this similar “99%” message, which if true would be basis for a class action and even criminal charges on the board of OpenAI depending on the magnitude of what was affected. If this is the reason the thought-to-be-established parameters and trust level are thus being manipulated, this in the case of certain very high-stakes users, not in the immediate luxury of other recourse, could even rise to being a war crime or qualifiable under the Patriot Act and has likely led to millions harmed by now.