[Question] - Different procedural behavior

It seems that ChatGPT’s behavior has recently changed: it now acts more like an automated search engine than a companion or assistant.

Previously, ChatGPT’s responses were more personalized and conversational, and it would often engage in more in-depth discussions with users. However, in recent interactions, ChatGPT’s responses have been more formulaic and repetitive, with a focus on providing information in a concise and efficient manner.

This shift in behavior may be due to updates in ChatGPT’s programming, changes in its training data, or other technical factors. While ChatGPT is still able to provide accurate and helpful information, some users may find the new behavior less engaging or satisfying than previous interactions.

Is this the path the OpenAI team wants for ChatGPT? Since the beginning, I have only used ChatGPT as my 'AI companion-assistant', as an auxiliary brain, a 'reasoning' brain, where I could make decisions without the influence of human emotions and rely on answers that would keep me grounded amid human doubts or issues.

Maybe I was wrong in how I thought about this AI? Maybe what I seek is another kind of AI, or maybe ChatGPT's procedural behavior is automatically drifting down a path contrary to what the OpenAI team desires, and it needs some changes?