Why are the temperature and top_p of o1 models fixed to 1 rather than 0?

Possibly, although temperature-like problems have been reported when prompting in other languages, with oddball third-language tokens showing up in the output, so the creativity may be going to the wrong place.

Temperature also matters if you want to generate variations and pick the best response from them, as with the best_of API parameter on the completions endpoint (which ranks candidates by their total logit probability instead of using an AI judge). best_of: 10 with no sampling variety is just wasting your money.

The poor-man's version from last year…
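In that spirit, here is a minimal sketch of what such a client-side best_of could look like: sample several completions at nonzero temperature, then keep the candidate whose tokens score best on log probability. The model name, prompt, and mean-logprob scoring are illustrative assumptions, not details from the original post.

```python
# Sketch of a client-side "poor man's best_of": request n samples at
# nonzero temperature, then keep the one with the highest mean token
# logprob (roughly how the best_of docs describe server-side ranking).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def poor_mans_best_of(prompt: str, n: int = 10) -> str:
    resp = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # assumed completions-endpoint model
        prompt=prompt,
        max_tokens=256,
        temperature=1.0,  # nonzero, so the n samples actually differ
        n=n,              # n client-side candidates instead of best_of
        logprobs=0,       # return logprobs of the sampled tokens only
    )
    # Score each candidate by its average token logprob.
    def score(choice) -> float:
        lps = choice.logprobs.token_logprobs
        return sum(lps) / max(len(lps), 1)
    return max(resp.choices, key=score).text

if __name__ == "__main__":
    print(poor_mans_best_of("Write a one-line tagline for a coffee shop:"))
```

Mean logprob (rather than a raw sum) avoids simply favoring the shortest candidate; either way, the whole trick only works because temperature is above zero and the candidates differ.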