Why Does Davinci Suddenly Love Daunting

I have recently noticed a dramatic increase in the word “daunting” being returned by Davinci. Has anyone else noticed this also? It’s such an odd word, but I’m more concerned as to why I’m observing a sudden increase in its return rate. Has the model changed?

Welcome to the community.
Unfortunately I have stopped using Davinci in favor of the gpt-3.5-turbo model.

But before that, I had not noticed any preference for such a word.

Good luck :+1:t3::four_leaf_clover:

They all seem to be constantly changing… GPT-4 won’t be able to solve a logic problem in the morning, but in the afternoon it magically can… That sounds great, but it can just as easily regress… They need a better (or at least more transparent) versioning strategy, and to freeze certain versions for consistency’s sake…

It might be a server-load thing; the responses seem to be slightly lower quality when demand is especially high.

This will help the model decide which words to use or avoid.
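If you want to discourage a specific word at the API level, the OpenAI chat completions API exposes a `logit_bias` parameter that adds a bias (from -100, which effectively bans a token, to +100) to specific token IDs before sampling. A minimal sketch of such a request payload; the token ID below is illustrative, not the real ID for “daunting”:

```python
# Sketch of a chat completions request body using `logit_bias`.
# The key is a token ID (as a string); the value is a bias in [-100, 100].
# "48426" is a placeholder token ID for illustration only.
payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "user", "content": "Describe learning a new framework."}
    ],
    # -100 effectively removes the token from consideration during sampling
    "logit_bias": {"48426": -100},
    "temperature": 0.7,
}

print(payload["logit_bias"])
```

You would send this payload to the completions endpoint as usual; the tokenizer for the model determines the actual token IDs, and a single word can map to several tokens (e.g. with and without a leading space), each needing its own bias entry.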


The temperature in chat.openai… is high, and it will give different results on each run.
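To illustrate why temperature produces different results on each run, here is a small self-contained sketch of temperature sampling (the logits are made up for the example): dividing the logits by the temperature before the softmax makes low temperatures concentrate almost all probability on the top token, while high temperatures spread it across alternatives.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample an index from `logits` after temperature scaling."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the resulting distribution
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Hypothetical logits over three candidate next words
logits = [2.0, 1.0, 0.1]
rng = random.Random(0)
low = [sample_with_temperature(logits, 0.1, rng) for _ in range(100)]
high = [sample_with_temperature(logits, 2.0, rng) for _ in range(100)]
# At temperature 0.1 the top token dominates; at 2.0 the samples vary
print(low.count(0), high.count(0))
```

At temperature 0.1 essentially every draw is the top token, which is why a low-temperature run looks deterministic; at 2.0 the same prompt yields noticeably different continuations from run to run.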

I guess they also add the date to each request; if so, you are right, and the results in the morning and afternoon will not be the same.

For example, when it’s afternoon, say 13:00, that could nudge the results to be more negative.

“The clocks were striking thirteen” is part of the first sentence of the dystopian novel 1984.

This was an example, and more of a presumption: if GPT models rely on the probability of the next word, I can imagine how different dates in the context could lead to different results.