GPT 3.5 not listening to prompt at all while GPT-4o provides correct output

GPT 3.5 and GPT-4o give very different answers when given the same prompt: GPT-4o follows the prompt, while 3.5 ignores it again and again.

Here is my prompt:

If The question does not have to do with the [Document name]
please gently remind them that you are unable to answer unrelated questions. *

GPT-4o has no problem complying and provides correct output each time.

Does anyone have any advice for how to rewrite this? I do not want to have to use the more expensive model for basically no reason.


Since 3.5 is cheaper, what I usually do is include at least a one-shot example. This means giving the model a concrete example of what you want in the system message.
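A minimal sketch of what that one-shot setup might look like with the OpenAI chat-completions message format. The document name, the example question, and the refusal wording are placeholders I made up for illustration, not from the original post:

```python
# Sketch: a system message plus a one-shot refusal example for gpt-3.5-turbo.
# The doc name, example question, and refusal text are assumptions.

def build_messages(doc_name: str, user_question: str) -> list[dict]:
    """Build a chat message list with a one-shot example showing the
    model how to refuse questions unrelated to the document."""
    system = (
        f"You answer questions only about {doc_name}. "
        "If a question is unrelated, gently remind the user that you "
        "are unable to answer unrelated questions."
    )
    return [
        {"role": "system", "content": system},
        # One-shot example: an off-topic question and the desired refusal.
        {"role": "user", "content": "What's the weather in Paris today?"},
        {
            "role": "assistant",
            "content": (
                f"I'm sorry, but I can only answer questions about "
                f"{doc_name}. Is there something about it I can help with?"
            ),
        },
        # The real user question goes last.
        {"role": "user", "content": user_question},
    ]

messages = build_messages("Employee Handbook", "How many vacation days do I get?")
```

You would then pass `messages` to the chat-completions endpoint as usual; the in-context example often does more for 3.5-class models than the instruction alone.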

Is that your entire prompt?

Thanks, I will try that. There is a little more, but I basically cut everything out to try to get it to comply with this one rule; I will add more once it responds correctly to this one. Thanks for the idea.


No problem. It might not work, so if you still have problems, feel free to come back. Curious whether it works for you or not, though! 🙂

It does not work at all.
GPT-4o follows the instructions right away, so it is something with 3.5, not with my instructions.
Thank you anyway. It will be simpler just to use the better model at this point.

Fair enough. If you share the prompt, we might be able to improve it a bit, but if you're okay with paying more, that's fine too. With 3.5, showing one or two examples of the output you want can sometimes make it perform better than a bigger model…

I have found that sometimes when GPT-4o switches to GPT 3.5 on a free account, it says things like "can't help with that". Sometimes a simple "why not?" results in "OK, here are the results you asked for" (and sometimes not).