Issues with Custom Instructions Transition from GPT-4 to ChatGPT-3

I’ve noticed that when transitioning from GPT-4 to ChatGPT-3, my custom instructions are not being followed, and the model defaults to standard responses. Has anyone else experienced this issue? What steps can I take to ensure my custom instructions are properly implemented across different versions?

Hi there!

Different models follow instructions differently. While the behaviour may be similar for very basic instructions, you will typically see material differences with more advanced ones. Given the significant gap in capabilities between gpt-4 and gpt-3, this is even more pronounced in this case.

In practice this means that you need to approach it the other way around and adapt your instructions to different models.
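
If it helps to picture the idea outside the ChatGPT UI, here is a minimal sketch of keeping a separate instruction variant per model, assuming the OpenAI Python SDK; the model names and instruction texts are placeholders, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical per-model instruction variants: the larger model gets the full
# rule set, the smaller model gets a simplified version of the same rules.
INSTRUCTIONS = {
    "gpt-4o": "Follow the detailed persona, tone and formatting rules below ...",
    "gpt-3.5-turbo": "Keep the same persona, but follow these short, simple rules ...",
}

def ask(model: str, user_message: str) -> str:
    # Send the instruction variant written specifically for this model.
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": INSTRUCTIONS[model]},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(ask("gpt-4o", "Summarise today's tasks."))
```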

I’m aware that different models follow instructions a bit differently, but the instructions seem to reset each time the model is changed. To get them working again I have to copy-paste them into the chat, and then everything functions as expected. It’s an issue that seems to happen every time ChatGPT “changes version”. It’s not a big deal and is easy to fix with the old copy-paste, but it would be useful and practical if ChatGPT kept using the instructions whenever the version changes.

I have had similar occurrences, both when switching from GPT-4o to GPT-3.5 and with general updates to a model: there are slight changes in the interaction patterns and more default responses.
With the release of GPT-4o, I had to fundamentally adapt the instructions of my custom GPT because it could no longer perform precise analyses with the previous ones.
Yes, I occasionally use the ‘copy&paste’ method in a new chat and it works well.
Try something ‘unusual’:
Ask the model in question to write a ‘prompt for itself’, including all your specific requirements. This can help ensure that the model follows the instructions exactly as you want and that the model dependency is taken into account. 😉
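
As a rough illustration of the ‘prompt-for-itself’ idea outside the ChatGPT UI (again only a sketch, assuming the OpenAI Python SDK; the meta-prompt wording and the requirements are just examples):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Example requirements; replace these with your own specific ones.
REQUIREMENTS = """\
- Always answer in the persona described in my notes.
- Keep analyses step by step and state which data you used.
"""

# Ask the target model to draft custom instructions for itself, so the wording
# is tuned to how that particular model interprets prompts.
draft = client.chat.completions.create(
    model="gpt-4o",  # the model you actually intend to use the instructions with
    messages=[{
        "role": "user",
        "content": (
            "Write a set of custom instructions for yourself that makes you "
            "follow these requirements exactly:\n" + REQUIREMENTS
        ),
    }],
)
print(draft.choices[0].message.content)
```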

Greetings once again, Tina. Good answer indeed, thanks. I already did that more than a year ago; the prompts are at maximum effectiveness for the characters that can be used. 4o and 3 respond in a similar way because they have to follow the “personality emulation / speech pattern”, so it is easy to figure out when a reset happens and why, since the output changes greatly, though it is still far better than the “default mode”. If you want, I can send you one of the prompts and you can try it.

@jr.2509 This approach makes perfect sense and I have found it to be effective with my custom GPT, as I have already described.
However, I also use the free version of ChatGPT, in which the GPT-4o and GPT-3.5 models alternate within a chat history. There it is more difficult for custom instructions to address the specific requirements of the respective model.

@gdfrza You’re very welcome. I started this in autumn and I’m always open to new input and refinements, so thanks a lot 🙂
