Performance Differences Between OpenAI Playground and Assistants API

Hey,

I’ve noticed significant differences in how my AI assistant performs in the OpenAI Playground versus through the Assistants API. In the Playground, the assistant responds accurately, follows the prompt, and correctly draws on the attached knowledge document as its knowledge base. However, when I deploy the same assistant via the Assistants API (integrated with platforms like Voiceflow and my website), the responses are less precise, the assistant seems to struggle to access the knowledge document, and the prompt isn’t fully respected.
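For context, the API side follows the usual Assistants pattern. A simplified sketch of what such a call looks like with the OpenAI Python SDK (the assistant ID and message below are placeholders, not my real values):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ASSISTANT_ID = "asst_..."  # placeholder: the same assistant configured in the Playground

# Create a thread and add the user's message to it
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="A question that should be answered from the knowledge document",
)

# Run the assistant on the thread; instructions, model and tools come from
# the assistant itself unless they are overridden when the run is created
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=ASSISTANT_ID,
)

if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)
else:
    print(run.status, run.last_error)
```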

Has anyone else experienced this issue? What can be done to ensure consistent performance and prompt adherence across both environments?

Thanks for your help!


You might want to check the Threads view in your dashboard to see whether everything there is as you expect (prompts, files, etc.). Threads should appear in the left-hand menu; if you don’t see them, enable the setting in your organization settings.
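If it’s easier, you can do the same check through the API. A rough sketch with the OpenAI Python SDK (the IDs are placeholders) that prints what the assistant is configured with and what each run on a thread actually used:

```python
from openai import OpenAI

client = OpenAI()

ASSISTANT_ID = "asst_..."   # placeholder
THREAD_ID = "thread_..."    # placeholder: a thread created by your integration

# What is configured on the assistant itself
assistant = client.beta.assistants.retrieve(ASSISTANT_ID)
print("Assistant instructions:", assistant.instructions)
print("Assistant tools:", [t.type for t in assistant.tools])

# What each run on the thread actually used; instructions or tools passed
# at run creation override the assistant's configuration
for run in client.beta.threads.runs.list(thread_id=THREAD_ID):
    print(run.id, run.status)
    print("  instructions:", run.instructions)
    print("  tools:", [t.type for t in run.tools])
```

If the run-level instructions or tools differ from what’s on the assistant, the integration is overriding them at run time, which would be one explanation for the different behaviour.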


I did check the threads but found no explanation for the problem there… Thanks a lot anyway!