I built a Telegram bot using OpenAI Assistants, but the chatbot responds differently for each user: some users get correct answers and some get incorrect ones. When I ask in the Playground, it always answers correctly, quickly, and accurately.
How can I make my Telegram bot as smart as OpenAI’s Playground?
Because Assistants aren’t deterministic, you will likely receive slightly different answers on different runs, so there is probably no single way to solve the issue. The best way to reduce incorrect answers is to identify the situations where the model gives you wrong answers, then add corrective guidance for them, either in the Assistant’s instructions or in a function the Assistant has access to.
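A minimal sketch of folding corrective guidance into the Assistant's instructions. The `build_instructions` helper, the base prompt, and the example corrections are all illustrative, not part of the OpenAI SDK:

```python
# Sketch: fold known failure cases into the Assistant's instructions.
# BASE_INSTRUCTIONS, build_instructions, and the example corrections
# below are assumptions for illustration, not an official API.

BASE_INSTRUCTIONS = "You are a support bot answering questions on Telegram."

def build_instructions(corrections):
    """Append an explicit rule for each situation the model got wrong."""
    if not corrections:
        return BASE_INSTRUCTIONS
    rules = "\n".join(f"- {c}" for c in corrections)
    return f"{BASE_INSTRUCTIONS}\n\nFollow these rules strictly:\n{rules}"

# Each entry corrects a failure you observed in real bot conversations.
corrections = [
    "Always answer in the same language as the user's question.",
    "If the question is about pricing, quote the price list verbatim.",
]

instructions = build_instructions(corrections)

# Applying them (assumes the openai Python SDK and a valid API key):
# from openai import OpenAI
# client = OpenAI()
# client.beta.assistants.update("asst_...", instructions=instructions)
```

Each time you find a new class of wrong answer, add a rule and update the Assistant, rather than hoping re-runs behave differently.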
Because the beta Assistants API does not let you control the sampling parameters that make responses more deterministic (temperature, top_p), and because OpenAI’s Playground likely uses different default settings for them, you are seeing a difference in behaviour.
In certain applications, consistency is key to delivering functionality for users. In those applications, the current beta Assistants API should NOT be used, at least not in the execution plane (i.e. runs).
For an immediate alternative, look at openairetro/examples/temparature at main · icdev2dev/openairetro · GitHub. It uses the Assistants API in the data plane but chat completions in the execution plane. In other words, use the semantics of the Assistants API with the mechanics of chat completions.
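A rough sketch of that hybrid pattern: keep conversation state in an Assistants thread (data plane), but generate each reply with `chat.completions`, where temperature and top_p are controllable (execution plane). The `to_chat_messages` helper, the model name, and the system prompt are my own assumptions, not taken from the linked repo:

```python
# Sketch of the hybrid approach: Assistants thread for history,
# chat.completions for generation. to_chat_messages, the model name,
# and the prompts are illustrative assumptions.

def to_chat_messages(thread_messages, system_prompt):
    """Convert Assistants-style messages (oldest first, each a dict with
    'role' and 'content') into the chat.completions message format."""
    chat = [{"role": "system", "content": system_prompt}]
    for m in thread_messages:
        chat.append({"role": m["role"], "content": m["content"]})
    return chat

# Usage (assumes the openai Python SDK; messages.list returns newest
# first, so the history is reversed before conversion):
# from openai import OpenAI
# client = OpenAI()
# history = client.beta.threads.messages.list(thread_id=thread.id)
# msgs = [{"role": m.role, "content": m.content[0].text.value}
#         for m in reversed(list(history))]
# reply = client.chat.completions.create(
#     model="gpt-4-turbo",
#     messages=to_chat_messages(msgs, "You are a helpful Telegram bot."),
#     temperature=0,  # low temperature for more consistent answers
#     top_p=1,
# )
```

Pinning temperature to 0 doesn't guarantee identical outputs, but it removes the sampling variance you can't control through Assistants runs today.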
hth