Finetuned gpt4o is not deterministic

This was just revisited recently.

Translation is naturally an uncertain task. Give the same assignment to two different human translators and you'll get divergence within a few words, if not at the very first one. An AI model, however, can explain that divergence statistically rather than by intuition:

A very useful tool not (yet? please) available from OpenAI would be assistant response continuation and completion. For polishing a particular piece of work or for developing training data, a native speaker could spot the highlighted token positions with high uncertainty or closely ranked alternatives (or simply passages where the writing itself becomes awkward), edit manually or choose from the token alternates, even ones close enough in probability to flip despite attempts at determinism, and then send the AI back on its path from that point. In total time, this would still produce a finished text far faster than one could write it oneself.
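Even without response continuation, the "close ranks" part is already inspectable today via `logprobs`/`top_logprobs` on chat completions. Here is a minimal sketch of flagging uncertain positions; the `sample` data and the `flag_uncertain` helper are hypothetical stand-ins for real API output, shaped like the per-token alternatives the API returns:

```python
import math

# Hypothetical per-token data in the shape returned when requesting
# logprobs=True, top_logprobs=2: each generated token plus its
# top-ranked alternatives with log probabilities.
sample = [
    {"token": "The",   "top": [("The",   -0.01), ("A",    -4.80)]},
    {"token": "quick", "top": [("quick", -0.60), ("fast", -0.95)]},
    {"token": "fox",   "top": [("fox",   -0.02), ("dog",  -6.10)]},
]

def flag_uncertain(tokens, margin=1.0):
    """Flag positions where the top two candidates are within `margin`
    nats of each other -- close ranks that could flip between runs."""
    flagged = []
    for i, t in enumerate(tokens):
        (best, lp1), (alt, lp2) = t["top"][0], t["top"][1]
        if lp1 - lp2 < margin:
            flagged.append((i, best, alt, math.exp(lp1), math.exp(lp2)))
    return flagged

for i, best, alt, p1, p2 in flag_uncertain(sample):
    print(f"position {i}: '{best}' ({p1:.2f}) vs '{alt}' ({p2:.2f})")
    # only position 1 is flagged: 'quick' (0.55) vs 'fast' (0.39)
```

A UI built on this could highlight exactly those positions for a native speaker to review, which is the first half of the workflow described above; the missing half is being able to resume generation from the edited point.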
