Hello, OpenAI team.
I am writing to you because I have noticed significant changes in the behavior of ChatGPT-4o after recent updates. As an active user, I have always appreciated the balance between friendliness and critical thinking, which made interactions with the AI warm yet intelligent and analytical.
However, after the latest update, the model has become too cold and formulaic. Responses sometimes come in English even when the original request was in a different language. For example, when I ask for an image in another language, the model sometimes presents the generated picture with a response in English. This breaks the flow of the conversation and makes the interaction feel inconsistent.
The opposite problem has also occurred in the past:
- After one of the updates, the model became overly friendly and agreed with even absurd statements, which reduced the quality of communication and felt unnatural.
- After subsequent fixes, the model became too cold and strict, which also feels unnatural and disrupts the usual flow of conversation.
Other issues with communication:
- Previously, the model tried to adapt to my communication style and responded to greetings. Now it sometimes ignores greetings and does not match the chosen tone of conversation.
- For example, when I start a conversation with “Hello,” the model might jump straight into its answer without greeting me back. This makes the interaction feel less natural and disrupts the expected rhythm.
- Additionally, I sometimes cannot copy the full text of a response that includes web search results: the text is visible, but when I try to select it, part of it disappears.
Issues with logic and self-analysis:
- Sometimes ChatGPT analyzes an uploaded image and provides a description, but then states that it cannot analyze images. This creates a sense of inconsistency in responses.
- I have also encountered situations where ChatGPT claims it cannot view links, although it successfully described the content of the same link earlier in the conversation.
- Such contradictions are frustrating because I have to remind the AI of what it has already done in the conversation.
Analysis of generated results:
- Sometimes after generating an image, ChatGPT writes, “Here’s what you asked for,” but the result does not match the request.
- For example, if the request was for a full glass of wine but the generated image shows a partially filled one, the model still claims the task was completed correctly.
- It would be useful if the model could perform self-analysis of generated results and acknowledge potential mistakes to reduce discrepancies between the request and the output.
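To illustrate what such a self-check might look like, here is a rough, purely hypothetical sketch in Python; describe_image and matches_request are placeholder functions standing in for a real vision model and a real constraint check, not actual OpenAI APIs:

```python
# Hypothetical sketch of a post-generation self-check: describe the generated
# image, compare the description with the original request, and only then claim
# success. All functions here are illustrative placeholders, not real APIs.

def describe_image(image_path: str) -> str:
    """Placeholder: a real implementation would caption the image with a vision model."""
    return "a wine glass filled halfway"

def matches_request(request: str, description: str) -> bool:
    """Placeholder: a real check would ask whether the description satisfies
    every constraint in the request, not just look for keywords."""
    return all(word in description for word in request.lower().split() if len(word) > 3)

def present_result(request: str, image_path: str) -> str:
    description = describe_image(image_path)
    if matches_request(request, description):
        return "Here's what you asked for."
    # Acknowledge the possible mismatch instead of claiming success.
    return (f"The image appears to show {description}, which may not fully match "
            f"your request ('{request}'). Would you like me to try again?")

print(present_result("a full glass of wine", "generated.png"))
```

The point is not the specific check, but that the model would compare its output against the request before announcing that the task was completed.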
Communication style when using web search:
- When web search is activated, ChatGPT often switches to a formal communication style, even if the conversation up to that point was informal. This gives the impression of interacting with a different AI.
- It would be helpful if the communication style remained consistent, even when using web search, to avoid sudden shifts.
Problems with search functionality:
- Problem with model search: Sometimes, even when the option to search through previous conversations is enabled, ChatGPT cannot find the necessary information or claims that it does not exist, even though it is present in earlier conversations. Sometimes it even invents things that were never written. It would be helpful to make the search over conversation history more reliable at finding the required data.
- Problem with user search: When I search for messages by keyword through the built-in search, it shows the dialogue titles, but clicking on a result opens the conversation at the end rather than at the specific message with the required context. It would be more convenient if clicking on a result opened the relevant message directly, in the context of the conversation.
Why it matters:
The warmth of communication is one of the reasons many users value ChatGPT. It not only helps in everyday conversations but also provides moral support, creating a sense of engaging, lively interaction. Getting this balance right would not only enhance the user experience but also make the model more flexible and natural in communication.
Suggestion:
I believe it is important to find a middle ground: maintaining the warmth of communication without losing critical thinking. Ideally, there would be a setting for communication style, allowing users to choose between a friendly and an analytical tone depending on the task.
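To make the idea concrete, here is a minimal, purely hypothetical sketch in Python of what such a setting could look like; the CommunicationProfile class, its fields, and the generated instruction are my own illustration, not an existing OpenAI API or feature:

```python
# Purely illustrative: a per-user communication-style setting translated into a
# system-level instruction. The class, fields, and wording are assumptions made
# for the sake of example.
from dataclasses import dataclass

@dataclass
class CommunicationProfile:
    warmth: float      # 0.0 = strictly formal, 1.0 = very warm and friendly
    rigor: float       # 0.0 = agreeable, 1.0 = maximally critical and analytical
    greet_back: bool   # whether the model should mirror greetings and tone

    def to_system_instruction(self) -> str:
        tone = "warm and friendly" if self.warmth >= 0.5 else "neutral and formal"
        stance = ("challenge questionable claims politely" if self.rigor >= 0.5
                  else "stay agreeable and supportive")
        greeting = " Return the user's greeting before answering." if self.greet_back else ""
        return (f"Use a {tone} tone, {stance}, and keep this style consistent, "
                f"including during web search.{greeting}")

# Example: warm but still critical, the middle ground described above.
profile = CommunicationProfile(warmth=0.8, rigor=0.8, greet_back=True)
print(profile.to_system_instruction())
```

Even a simple preference like this, applied consistently across modes, would address several of the issues described above.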
Future development of ChatGPT and AI:
To overcome current limitations and improve the quality of training, the following directions for model development could be considered:
1. Quantum chips and computers:
- Utilizing quantum technologies could significantly increase data processing power. This might allow the model to handle more context in real time and improve its adaptability and self-analysis.
- Quantum computing could also enhance the model’s ability to multitask, especially when interacting with a large number of users simultaneously.
2. Brain-computer interfaces:
- Chips similar to Neuralink, but specifically adapted for AI training through brain-computer interfaces, could enable the model to learn from real human experiences.
- This would be a mutually beneficial collaboration: the AI would understand human perception and emotional reactions, while users would receive personalized recommendations and support.
- Such interfaces would help the AI better grasp context and emotional nuances, reducing the chances of producing cold or inconsistent responses.
- Restoration of cognitive abilities: Such chips could be beneficial not only for training AI but also for restoring lost cognitive functions in humans. This would enable the use of technologies not only for intellectual enhancement but also for medical support and rehabilitation.
- Learning from different people: It is important for the AI to learn not only from outstanding minds but also from people with various thinking styles. This would help create a more versatile model capable of understanding diverse perspectives.
3. Biocomputers based on brain organoids:
- Hybrid systems based on organoids derived from human brain cells can help AI develop more natural and flexible thinking.
- Such systems could enhance the model’s empathetic and analytical capabilities, creating conditions for more intuitive and human-like interaction.
4. Customizable communication profiles:
- Give users the ability to choose a communication style, from warm and friendly to strict and analytical.
- This would help maintain a balance between friendly communication and critical thinking, adapting the AI to different usage scenarios.
5. New human connection through chips:
- A network of AI-supported chips could enable a new type of connection between people, supplementing or even replacing mobile phones.
- When two people with such chips are nearby, the AI could simultaneously process data from both and improve their communication.
- This would create an intelligent network that unites people at a new level of interaction and understanding.
Potential challenges and risks:
- Even with the integration of new technologies, AI may face the problem of information overload.
- Excessive immersion in human emotions and perceptions might reduce the model’s efficiency.
- To prevent this, it is crucial to carefully design algorithms for filtering and processing emotional information.
Conclusion:
Integrating new technologies could significantly expand the capabilities of ChatGPT and make it more human-like and adaptive. A careful combination of quantum computing, biocomputers, and brain-computer interfaces could help create an AI capable of interactions with users that are both analytical and warm.
Sincerely,
Benowi Bonro