Why is the answer given by the assistant in the Playground different from the answer given by the API?

My question is essentially the title above. When I work with an assistant in the Playground, it responds well. But when I send the exact same prompt through the API, the answer is much worse than the Playground answer. What could be the reason for this?


Same here. I think it may be because OpenAI wants us to switch to Assistants v2 so badly that it has even disabled v1 API interaction in the interface. You can still send requests to v1 using the Python library (which now defaults to v2), but I found the responses have become almost twice as bad.
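For reference, the Python SDK selects the Assistants version via the `OpenAI-Beta` request header, and recent versions send `assistants=v2` by default. A minimal sketch of the header you would need to keep targeting v1 (assuming your SDK or HTTP client lets you override default headers):

```python
# Sketch: request headers for the Assistants API.
# The SDK sends "OpenAI-Beta: assistants=v2" by default; overriding it
# is one way to keep hitting v1. The Authorization value is a placeholder.
headers = {
    "Authorization": "Bearer $OPENAI_API_KEY",  # placeholder, not a real key
    "OpenAI-Beta": "assistants=v1",             # pin the beta version to v1
}
```

With the official Python client you can pass such a mapping via its `default_headers` option when constructing the client.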

At the default parameters, every reply from an assistant to the same input will differ significantly. Depending on the input, it might even be a 50/50 toss-up whether the AI says "Sure!" or "I'm sorry, I cannot."

Assistants v2 finally gives you more control over this by exposing top_p and temperature.
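A sketch of what that looks like in practice: the v2 Assistants API accepts `temperature` and `top_p` when creating a run, so you can pin them to match what the Playground shows. The assistant ID below is a placeholder; the numeric values are just examples:

```python
import json

# Sketch: request body for creating a run under Assistants v2,
# pinning sampling parameters so API output matches the Playground.
payload = {
    "assistant_id": "asst_abc123",  # placeholder ID
    "temperature": 0.2,             # lower = more repeatable output
    "top_p": 1.0,                   # keep full nucleus; tune as needed
}
body = json.dumps(payload)
```

The same keyword arguments can be passed to the Python SDK's run-creation call instead of building the JSON body by hand.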