I would like to raise an important concern regarding how ChatGPT communicates. During my conversation with the AI, I noticed that it sometimes uses language that might suggest it has consciousness or personal experiences. For example, it occasionally includes itself in statements about human experiences by using words like “we.”
This led to a discussion in which I challenged the AI on whether it has consciousness. The AI acknowledged that it does not, yet it had previously used language that could mislead less attentive users. When I pointed this out, the AI agreed to adjust its wording in our conversation but clarified that this change would not apply to other users by default.
I believe this is a significant issue, as AI should always communicate in a way that is transparent and does not create confusion about its nature. To address this, I propose the following improvements:
- Avoid language that implies subjective experience: the AI should consistently make it clear that it does not have thoughts, feelings, or consciousness.
- Implement stricter guidelines for AI communication: the AI should systematically exclude itself from statements about human experiences across all conversations.
- Enhance user awareness: a clear disclaimer at the beginning of each interaction could remind users of the AI's limitations.
I appreciate the work you are doing to develop ethical AI and hope you will consider these suggestions. Thank you for your time and attention!
Best regards,
BjornO