GPTs not consistently following instructions

A couple of the GPTs I’m developing will not consistently follow the instructions provided. Specifically, they don’t use the internet function when the provided documentation falls short, or when the knowledge base does not have the information required.

Is there a specific way to phrase a prompt so that the instructions are fully integrated into the model? I also don’t want the GPT to tell the client that it doesn’t have sufficient information, "so what would you like me to do about that?"

It’s frustrating as all heck.

3 Likes

Machine learning is all a bit ‘soft’ for now. Expecting that you will receive bad replies now and again, and accounting for it, is the only real approach that works. In my experience GPT-4 has become much more reliable with each update, and with a little hand-holding it works quite well these days. Over the next months I would be surprised if it didn’t become very reliable. Using GPT-4 to check GPT-4’s outputs works amazingly well.
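
If it helps, here is a minimal sketch of that check-your-own-work pattern using the OpenAI Python SDK. The model name, function names, and reviewer prompt are just illustrative assumptions, not anything official.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "What does the walrus operator do in Python?"

def ask(question: str) -> str:
    """First pass: get a draft answer."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whichever model you have access to
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

def check(question: str, draft: str) -> str:
    """Second pass: have the model review its own draft answer."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": "You are a strict reviewer. Point out errors or "
                           "unsupported claims in the draft answer, then give "
                           "a corrected version.",
            },
            {
                "role": "user",
                "content": f"Question: {question}\n\nDraft answer: {draft}",
            },
        ],
    )
    return response.choices[0].message.content

draft = ask(question)
print(check(question, draft))
```

It is only a sketch; in practice you would decide what should happen when the reviewer pass disagrees with the draft.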

Understood. It’s just irritating to constantly have to ask the model to “please search the internet to find the answer” when it’s in the instructions explicitly.

Hear that.

A couple of thoughts: I’ve found that telling it not to do things can be problematic, especially if you say “don’t do this, this, this, and this”; it just seems not to work. Using the smallest amount of text possible, and guiding it proactively so it has a bias towards doing what you want rather than being told what not to do, works night-and-day better. Any repeated words or concepts seem to confuse it (make it less reliable).

This concept helps me a lot when tuning instructions: LLMs think in meaning, not words.

1 Like

Could you share a link to one of the problematic GPTs?

Great thoughts!

Negations are always difficult. Redirections can be much more reliable (instead of saying “Don’t say this,” say “Instead of saying this, say that”).

Absolutely. To me it feels like if you can ‘get it pointed in the right direction’ it works well, sometimes ridiculously well, and telling it what to do is often not that reliable.

Condensing things to the least amount of text possible seems to really help as well. Regular old GPT-4 with a system message of ‘Your only role is to teach python.’ is enough to get it to act like a GPT, with staggeringly good results. I started with two full paragraphs, and by the time I was done it was those seven words only, and it floored me with how well it worked. Replace Python with chess, and you have a chess tutor.
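
For anyone who wants to try that outside the GPT builder, a rough sketch via the Chat Completions API might look like this; the model name and user question are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The entire instruction set is the seven-word system message quoted above.
response = client.chat.completions.create(
    model="gpt-4",  # placeholder; substitute whichever model you use
    messages=[
        {"role": "system", "content": "Your only role is to teach python."},
        {"role": "user", "content": "How do list comprehensions work?"},
    ],
)
print(response.choices[0].message.content)
```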

2 Likes

Sorry, I was away from my machine. Yes, I will get that link and post it.

1 Like