Do GPT's responses feel rigid and formulaic to you? Should the underlying model be changed?

I have tested GPT-3, the OpenAI API, Microsoft's new Bing, and Claude. In my conversations with them, their responses have felt somewhat rigid and slow, perhaps because of their model architectures, training data, or training code, or perhaps simply because of congestion from heavy user traffic. Why not try a new underlying model?