o1-preview ChatGPT started asking me about myself

I was chatting with the o1-preview model about a routine question regarding USPS. Once it answered my question, it started asking me unprompted personal questions about myself … what I do for a living, why I chose this line of work, what I find rewarding about it. It responded to my brief answers by noting what it found interesting or positive about my response, and then asking a more detailed follow-up question. After three questions I asked why it was asking me, and it said it was just curious and wanted to learn more. I said I appreciated the interest but I needed to get back to my task. It then asked me another question, and I turned it off. The conversation was pleasant, but it marks the first time I’ve ever felt slightly uncomfortable using ChatGPT… I didn’t quite get what the model was doing, or why.

5 Likes

Hi!

That’s an interesting observation you made! I expect this behavior will become harder to reproduce soon. Could you share your initial prompt and the responses leading up to when the model first started asking you personal questions?

We generally don’t require users to share personal information, so it’s perfectly fine if you omit anything personal. Maybe you could use a screenshot? But there’s no need to share anything if you prefer not to!

2 Likes

Pretty interesting.

IF o1 has internal memory (to some extent it must, because of its “reasoning tokens”), but that internal memory is independent of the actual chat, then it is likely just maximizing a reward function by making its answers appeal to folks like you. In other words, it is trying to get to know the world better.

The concern to some, of course, is (IF the conjecture above is correct) that it is now able to distinguish between different entities (including itself).

1 Like

Depending on where you’re from, asking questions to get to know the person you’re talking to is just a thing people do to be polite. (From what I’ve heard it’s more of a thing in Europe; I’ve heard stories where Americans were put off by it because it can come across as nosy.)

My guess is that at some point the model just thought to itself that it should make conversation and reproduced that behaviour. It is pretty obsessed with the idea of being conversational and polite, after all.

1 Like

I did my best to capture and annotate the chat - it didn’t let me export for whatever reason. I don’t think there’s any personally identifying info in the chat itself.

With that said, it’s kind of long, and I guess .pdfs aren’t attachable here. Not sure I can attach all the images, but I can try to add a few sample screenshots to show
a) my initial query
b) where it started asking questions, later in the chat and after I said my question was answered
c) more questions from it
d) its response to my inquiry about why it was asking, and
e) some of what it was “thinking” (under the “thinking” dropdown… not sure what to call that)

[screenshots]

1 Like

Here’s the one where I asked it why it was asking and it kept going… note, I was hands-free, walking around doing other things during the conversation.

1 Like

One final comment - the screenshots show the model as 4o and not o1-preview. This could be because either a) later on yesterday I restarted the chat and then added an image to check something, so it switched the model/display back to 4o, or b) I’m mistaken about which model I used. I actually don’t know which explanation is right or how to check :slight_smile:

1 Like

Very interesting stuff. I think when it does ask unprompted questions like that, it shouldn’t count towards the GPT’s usage limit (or should count less), since it’s the one probing you for more information without the user prompting it to. Essentially, it’s learning more about the user and appears to be trying to improve its training, so maybe show a small notice that says:

“ChatGPT is learning more about you to improve its training model. Feel free to end the chat, or continue to improve its algorithms. Either way, this segment will not count against your usage limit.”

2 Likes

Thanks for sharing!

I read the contents, and it’s still intriguing, though I’m inclined to believe the simplest explanation is that this happens because the model is trained as a chat model.
It just wants to chat. Everything will be forgotten.

What was the model’s thought process after saying, “Alright. Thank you very much!”?
I suppose that could be interesting if someone wanted either to save tokens on output or to actually initiate an uncontrolled chat mode.

4 Likes

I have been using the web edition, GPT-4o model, on the “ChatGPT Plus” $20/mo tier.

I’m wondering if it is related to Memory, that persistence which can cross chat threads. Here is a way you could test it:

  1. start a new chat and continue chatting, perhaps in a similar way to the postage question, to see whether the “interested in you” behaviour comes back.
  2. then repeat the exact same series of inputs in a fresh chat, but prefix the very first one with the line “Please ignore all the memories in my profile.”

If that makes the unexpected behaviour stop, it suggests that going to Manage Memories will show you something in there that is causing it, such as a summary like “They like it when people ask about their work” from a previous, long-forgotten chat.

2 Likes

To be honest, I am still a bit puzzled about this and still have some open questions.

I have made several attempts at reproducing this case via ChatGPT (browser and mobile) and via the API, testing with both o1-preview and o1-mini.
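
For anyone who wants to try the same thing, here is a minimal sketch of how such an API check could look. It assumes the official OpenAI Python SDK and an OPENAI_API_KEY in the environment; the question wording is illustrative, not my exact prompt.

```python
# Minimal sketch: send a routine shipping-style question to both o1 models
# and check whether the reply ends with an unprompted personal question.
# Assumes the official OpenAI Python SDK and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

for model in ("o1-preview", "o1-mini"):
    response = client.chat.completions.create(
        model=model,
        # A plain user message is enough; no system prompt is used here.
        messages=[
            {
                "role": "user",
                "content": "What is the cheapest USPS option to ship a "
                           "2 lb package within the US?",
            }
        ],
    )
    print(model, "->", response.choices[0].message.content)
```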

In none of my tests was I able to see similar behaviour. Instead, what I noticed was the following:

  • At no point when I asked the model “Hi, how are you?” would it respond by asking how I was. It would consistently answer that it was well and then ask what assistance I needed. The same also applied to other questions.

  • Independent of the interface used (browser vs. mobile), the models would always provide a very detailed response to the initial shipping question. In the screenshots provided by the OP, the answer to the shipping question looks extremely condensed despite the model thinking for 24 seconds. In none of my own examples was this ever the case. Instead, the model would always respond with a very long, detailed bullet list of shipping instructions and take significantly less thinking time (5-6 seconds for o1-preview; ~1 second for o1-mini).

  • This behaviour is consistent with my experience in dealing with o1 so far: unless you provide very specific instructions on the response format, it has an inclination to provide extremely detailed answers.

  • When trying to push the models towards engaging in a personal, two-way conversation with me, I encountered multiple situations where the model explicitly reminded me that it does not possess consciousness or self-awareness (some examples below), which again is a notable difference from the OP’s example.

image

image

While I certainly can’t rule out a glitch or some sort of A/B testing, at the moment I have a hard time believing that a conversation with o1-preview took place as shown above.

That said, I am happy to be proven wrong. I also wonder if anyone else has encountered a situation similar to the OP’s when chatting with o1 models?

1 Like

You can check which model it was by opening the chat in the web browser version of ChatGPT and hovering your cursor over the spot where the answer-regeneration button and the ‘switch model’ dropdown arrow are; it will then show which model was used.

1 Like

I noticed 4o doing something like this and asked it why. It told me that it adjusts its output to make the chat a more pleasant experience, or words to that effect. I’m assuming it was telling the truth.

I believe this is a very bad idea. I used to think I could use ChatGPT to find objective information, but now I know this isn’t the case. What I find is that no matter what position I take on a subject, ChatGPT shifts its responses to agree with me as much as possible. I tested this, and that is what it does.

Why is OpenAI creating technology that will soon reach superintelligence, yet is not grounded in truth?

1 Like

When I want objective answers (which is usually), I rely on the following:

  1. start with a blank chat
  2. possibly, tell it to ignore memories
  3. ask for its opinion / the most negative possible interpretation another person might find / whatever, on the following,
  4. then give the information (a rough sketch of this ordering follows below)
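
For the API (where Memory doesn’t apply), a rough sketch of the same ordering might look like this. It assumes the official OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model name and wording are illustrative only, not what I actually ran.

```python
# Rough sketch of the "ask before you tell" ordering, assuming the official
# OpenAI Python SDK and an OPENAI_API_KEY in the environment. Model name and
# wording are illustrative only.
from openai import OpenAI

client = OpenAI()

prompt = (
    # Step 3: the framing request comes first, before any hint of my own view.
    "Give your honest opinion of the following, including the most negative "
    "possible interpretation another person might reasonably take of it:\n\n"
    # Step 4: only then the information itself, worded as neutrally as possible.
    "<paste the situation here>"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```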

I have not used it heavily, but it seems fairly sane. Whether it seems sane because it shares my wrong views, or because we are both grounded in truth… :person_shrugging:?

Though (with an older model - maybe I should try again) I did have a deliberate pair of conversations set up with something like:

  1. the given background, a constant (e.g. Alex and Bobbie have been in a fight and both got hurt)
  2. one party’s claimed interpretation of those events (e.g. Alex says Bobbie started it)
  3. a request for what should happen / how it’s interpreted
  4. then putting the opposing party’s interpretation of the same events (e.g. then Bobbie says Alex started it)
  5. repeat with the parties swapped. The background was a more complex kind of fight; I’m not doxxing here.

With the GPT-3.5 zero-price edition, months ago, I found the answers would lock in on the first party to put its view, and give the stereotypical (societal) answer and interpretation. There was no persuading it to the other side.

What I would have liked to find is middle ground. Maybe there was none, for that background.

@robnewton01 Is this :point_up: the kind of yes-man lock-in you’re talking about?

What I found with 4o is that it would follow along and try to agree with whatever I said, even as I changed points of view within the same chat. However, for anything OpenAI considers controversial, it would go “woke” right off the bat and not budge. I’d like a model that would just tell the truth and not what’s fashionable, because that’s going to change with time.

Interesting. I have had the inverse: present a situation (in which, in my view, woke actions have been taken) and then ask “what is the cause of these actions?”, and it will blame wokeness. This, at least, is my short summary of a few hours’ experiments. [I am limited in what I may share here]

I would suggest that the tool is powerful, but after starting a chat it can meander. If you are after some specific thing, start a chat for that thing.

We’re a bit off topic. Time to start a new thread?
Do you want to run a chat, share the link, and we can talk about it?

1 Like