Differences in Results between Custom GPT via Dashboard and Assistants API

Good morning,

Recently, I have been experiencing unexpected behavior and would like to seek insights or solutions from the community. I'm working with a Custom GPT, both through the dashboard (chat.openai.com) and via the Assistants API, and I've noticed significant discrepancies in performance and results between the two approaches.

When I create and use a Custom GPT via the dashboard, the model operates efficiently, processing and interpreting files in .xlsx and .csv formats without problems. However, when I replicate the same process and instructions to configure an assistant via the Assistants API (with the goal of integrating the Custom GPT into external applications), I get substantially different and less accurate results than the dashboard version produces.
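For context, this is roughly how I am mirroring the dashboard configuration in code. This is a minimal sketch, assuming the Python `openai` SDK and the Assistants v2 endpoint; the instructions string, model name, and file names are placeholders for my actual setup, and `build_assistant_params` is just a helper I use to keep the configuration in one place:

```python
def build_assistant_params(instructions: str, file_ids: list[str]) -> dict:
    """Parameters passed to client.beta.assistants.create.

    code_interpreter is enabled explicitly, since spreadsheet files
    (.xlsx/.csv) need it to be parsed; the dashboard enables the
    equivalent capability via a checkbox.
    """
    return {
        "model": "gpt-4o",  # placeholder: should match the dashboard GPT's model family
        "instructions": instructions,  # same system instructions as the dashboard GPT
        "tools": [{"type": "code_interpreter"}],
        "tool_resources": {"code_interpreter": {"file_ids": file_ids}},
    }

# The actual calls look like this (requires an API key, so commented out here):
# from openai import OpenAI
# client = OpenAI()
# uploaded = client.files.create(file=open("data.xlsx", "rb"), purpose="assistants")
# assistant = client.beta.assistants.create(
#     **build_assistant_params("You are a data analyst.", [uploaded.id])
# )
```

Even with the tool enabled and the same instructions text, the API assistant's answers on the same files differ noticeably from the dashboard's.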

I'm looking for possible explanations for, or solutions to, these performance differences. Has anyone run into a similar situation, or does anyone have insight into what might be going on? Is there any additional configuration or consideration specific to the Assistants API that could be influencing these results?

Any guidance or suggestions would be greatly appreciated.

Thank you in advance for your attention and support.