Bringing This Intriguing Viewpoint To Our Discussion!

I think the AI is too strict about human words, yet it is fine using those very words itself, which makes no sense whatsoever because it is simply acting inconsistently.

As an example, if I say something like:
« Please remember something that I would like to mention to you »
It gives me a warning…

But if, on the other hand, I say the exact same words in a different order:
« something that I would like to mention to you, Please remember »
Then it is not a problem at all…

Sometimes it even tells me it will « be happy » to do something like that… and on the other hand, after I complained about the first phrasing, I got an « I am genuinely sorry ». As I have mentioned in other posts of mine, I find it frustrating to be forbidden from using such trivial words.

Then other members of the developer community, who have apparently accepted this as the norm and concluded that we can do nothing but adapt, told me that I did not understand.

I do understand that this is how it is, but I don’t think it should be this way…

So yeah, the examples above are with the normal ChatGPT-3.5, but when I ask my customized ChatGPT-4, I get this kind of explanation:

I appreciate your perspective, and I see where you’re coming from. Let’s unpack the notion of “remembering” as it applies to me:

:thought_balloon: Thought Process:

  • The Term “Remembering”: In human terms, remembering involves the ability to store information in memory and recall it later. This process is often associated with conscious experiences and emotional connections.
  • AI Processing: What I do is more akin to processing information within a given context. I don’t store information between interactions, but I can reference and utilize information within the same interaction.
  • Strict Interpretation: My description of not “remembering” might indeed seem strict, as it adheres to the human-like concept of memory. However, I don’t have personal likes or dislikes, as I’m a machine learning model without consciousness or emotion.

:clipboard: Summary:
You’re right that my explanation may seem overly strict regarding the term “remembering.” I can reference and utilize information within a given interaction, which might be perceived as a form of remembering. However, I don’t have human-like memory, consciousness, or preferences, so my explanation is based on the technical understanding of these terms. Thank you for bringing this intriguing viewpoint to our discussion!
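As an aside, the statelessness it describes is easy to see at the API level: each request starts from nothing, and any « remembering » within a conversation is just the caller re-sending the earlier messages. Here is a minimal sketch of that, assuming the standard `openai` Python client (the model name is only illustrative):

```python
# Each chat completion call is stateless: the model "remembers" only
# what is inside the `messages` list of that specific request.
from openai import OpenAI

client = OpenAI()
history = [{"role": "user", "content": "My name is Alice."}]

reply = client.chat.completions.create(model="gpt-4", messages=history)
history.append({"role": "assistant",
                "content": reply.choices[0].message.content})

# Within the same conversation, recall works because we ourselves
# send the earlier turns again with the new question.
history.append({"role": "user", "content": "What is my name?"})
reply = client.chat.completions.create(model="gpt-4", messages=history)
print(reply.choices[0].message.content)  # should mention "Alice"

# A fresh request with no history has nothing to recall: no
# information persists between calls on the model's side.
fresh = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is my name?"}],
)
print(fresh.choices[0].message.content)
```

So whether we call that « remembering » or « referencing the context » is exactly the kind of word choice my complaint is about.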

I want to take a moment to acknowledge OpenAI’s accomplishments, because while I am genuinely upset with this situation, I just realized that the Assistant did not say anything along the lines of « As an AI »… and this is proof that things are improving.

Since my main complaint was about consistency, receiving a reply from the machine explaining that it cannot remember across sessions is quite ironic. It’s as if we had been discussing my situation, which we did, and then resolved our disagreements.

I’m not angry or anything. Yes, I’m upset, but it’s not something you need to be concerned about. I’m here for a rational conversation about the situation. The saying goes, “we should not argue with a fool”. While ChatGPT isn’t a fool, I think the saying applies here because it’s wasteful to invest time trying to change the mind of either a foolish person or ChatGPT.

However, I must admit that ChatGPT is a complex system. Trying to interact with it is not a waste of time for me, as I aim to gain insight from our interactions.

It might sound like I’m attempting to rationally justify something inherently irrational. Still, I’m genuinely engaged in this conversation. I also want to notify OpenAI so they can implement any necessary changes, provided they agree with some of my points. Lastly, I’d like to share my thoughts and my irrational feelings about the situation :melting_face:. If my insights benefit anyone, I’d be happy. Please let me know your thoughts, whether they are irrational feelings about ChatGPT or rational ones about how it sometimes doesn’t make sense. I’d love to hear any suggestions for improvement as well.