Fine-tuned gpt-3.5 API returns different response when one symbol changed in input messages

ChatGPT - the web chatbot - provides different answers to the same prompt, and that's by design. Not only does unexpected word use (instead of the most probable token) read as more human and inspired, it also lets OpenAI gather good and bad responses to the same questions.

However, in the API, we can control the sampling parameters exactly. With temperature set to 0 (or top_p near 0), the AI can say the same thing 100 times to the same input if I want to pay for it. Or, with the temperature turned up, I can have little Timmy's day be different every time the AI writes about it.
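To make the effect of temperature concrete, here is a minimal sketch of how temperature-controlled sampling works in principle. The logit values and five-token vocabulary are made up for illustration; this is not the API's actual implementation, just the standard softmax-with-temperature idea, where temperature 0 collapses to greedy argmax decoding.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from logits. temperature=0 means greedy (argmax)."""
    if temperature == 0:
        # Greedy decoding: always return the single most probable token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Scale logits by temperature, softmax, then sample from the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy logits for a hypothetical 5-token vocabulary.
logits = [2.0, 1.5, 0.3, -1.0, -2.5]

# temperature=0: identical output on every call, regardless of RNG state.
greedy = [sample_token(logits, 0, random.Random(i)) for i in range(100)]
print(len(set(greedy)))        # 1 distinct token chosen

# temperature=1: different calls can land on different tokens.
varied = [sample_token(logits, 1.0, random.Random(i)) for i in range(100)]
print(len(set(varied)) > 1)    # True
```

Note that even at temperature 0, the real API can still drift occasionally (floating-point nondeterminism across hardware, ties in probabilities), which is why a changed symbol in the input can flip the whole continuation: it shifts the probabilities of every subsequent token.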

That said, even if the AI starts a sentence with a different word or two, if it has a plan of what it is going to write, it is hard to distract it into less likely token choices on the same topic.

There are near-infinite token combinations I could have used to write this reply, and who knows why I chose the first human token “Chat”, but the overall idea was fully formed by the input I was responding to.