If I supply the exact same prompt to ChatGPT (GPT-3.5) and to gpt-3.5-turbo via the API, with no context included in either case, I get different responses. This is with zero saved context in ChatGPT, nothing but a single user message in the API call, and a temperature of 0. ChatGPT seems to give me a better response.
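For reference, my API call is essentially the following sketch (the model name and prompt are placeholders, and the response parsing assumes the current OpenAI Python SDK):

```python
def build_request(prompt: str) -> dict:
    """Build the minimal request described above: one user message,
    no system message, temperature 0."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,
    }

if __name__ == "__main__":
    # Requires `pip install openai` and an OPENAI_API_KEY env variable.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        **build_request("Summarize the plot of Hamlet in one sentence.")
    )
    print(resp.choices[0].message.content)
```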
I tend to work out prompts in ChatGPT first, then implement them in the API.
What could be causing this difference, and what should I research or read to understand it better? Thanks!