Custom GPT vs API + System Prompt
Question:
I created a prompt for a Custom GPT and it works very well.
Using Vercel, I also built a UI that calls the API. Before each conversation starts, it reads a system prompt (the same one used by the Custom GPT) so that the behavior is consistent.
And it actually is: interactions follow the expected tone and flow.
However, when it comes to generating content, the responses are shallow — unlike the Custom GPT, which provides excellent output.
To isolate some variables, I had external users (using ChatGPT with no memory) access the GPT directly, and they also got high-quality results. Meanwhile, the UI + API version remains overly generic.
Edit: I used the following messages array:

[
  { "role": "system", "content": "system_prompt_01.md" },
  { "role": "user", "content": "the user's question" }
]
and
Temperature: 0.7
Top P: 1.0
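One thing worth double-checking from the snippet above: the system message's content is the string "system_prompt_01.md". If the request literally sends the filename rather than the file's text, the model receives a few characters of "prompt" and will behave generically. A minimal sketch of loading the file contents first (assuming a Node runtime; `buildMessages` and the path are hypothetical names, not from the original post):

```typescript
import { readFileSync } from "node:fs";

// Build the messages array for a chat completions request.
// The system content must be the *text* of the prompt file,
// not its filename: passing "system_prompt_01.md" as a string
// would send a ~20-character system prompt to the model.
export function buildMessages(systemPromptPath: string, userQuestion: string) {
  const systemPrompt = readFileSync(systemPromptPath, "utf8");
  return [
    { role: "system", content: systemPrompt },
    { role: "user", content: userQuestion },
  ];
}
```

Logging `messages[0].content.length` before sending the request is a quick way to confirm the full prompt text (not the filename) is what actually reaches the API.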
Any ideas?