Recently, I’ve been getting the same results when adjusting the temperature parameter of the OpenAI API with the GPT-4 model: the output is identical whether I set 0.2, 0.6, 0.8, or even 1. I’m interfacing with OpenAI through LangChain (I doubt that’s the issue, though). This started happening about two days ago; before that, the higher the temperature I set, the more creative my results were, which was ideal for my use case. Has anyone else noticed the same thing recently?
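
For context, I'm passing temperature roughly like this (a simplified sketch, not my exact code; the import path and prompt are placeholders and may differ depending on your LangChain version):

```python
# Minimal sketch: passing temperature through LangChain's ChatOpenAI wrapper.
# Older releases import it as `from langchain.chat_models import ChatOpenAI`.
from langchain_openai import ChatOpenAI

prompt = "Write a short, imaginative tagline for a coffee shop."  # placeholder prompt

for temp in (0.2, 0.6, 0.8, 1.0):
    llm = ChatOpenAI(model="gpt-4", temperature=temp)
    response = llm.invoke(prompt)
    print(f"temperature={temp}: {response.content}")
```

Running a loop like this, I'd expect noticeably more varied wording at 0.8–1.0 than at 0.2, but lately the outputs look essentially the same across all four settings.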