Does anyone know why the custom instructions aren’t working?
I’d guess it’s since the DevDay update to GPT-4 Turbo that I’ve not been getting the responses I expect.
For example, I have:
- Get straight to code. I don’t want explanations at the start of your response. Put them at the end
- Give responses that get straight to code and provide the explanation/high-level overview at the end. I don’t like ChatGPT repeating what I said at the start of its responses
And the first thing in the response is the explanation.
Do custom instructions like those seem to help and take effect for you?
Because I have: Always give me complete functions, don’t use placeholders.
These don’t seem to work / are ignored.
The battle with placeholder comments is especially rough. It can sometimes take two or more requests for full code before the `// some logic goes here` goes away. I have not checked whether saying “Please” helps.
Hello-- having the same issue here. In addition to significantly worse GPT-4 performance and Code Interpreter/Data-Analysis performance (like I can’t tell the difference between GPT-4 and GPT-3.5 anymore), my custom instructions haven’t worked at all since Dev Day.
Fingers crossed they find a solution-- GPT-4 with custom instructions lets me work 400% faster than pre-AI days. It feels like I’m moving at a snail’s pace now, back to human brain only.
@mail.reknew the custom instructions aren’t working at all.
Seeing the same effects as @adopuspro
Just noticed something-- if I prompt hard enough, ChatGPT can access the information in my Custom Instructions. However, it’s still ignoring the instructions completely.
Seems like this might be a performance issue; maybe GPT-4 has been lobotomized?
Noted the same behavior on my end yesterday.
I think passing the custom instructions manually in the message, as before, will likely be a workaround until the situation is fixed.
It’s not that I like it, but it’s something that can be dealt with in many cases.
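To illustrate the workaround: you can prepend your custom instructions yourself as a system message before sending the conversation. This is just a minimal sketch of building the message list; the helper name and instruction text are my own examples, not anything official:

```python
# Workaround sketch: prepend custom instructions as a system message,
# instead of relying on the (currently broken) Custom Instructions feature.

def with_custom_instructions(instructions: str, messages: list[dict]) -> list[dict]:
    """Return a new message list with the instructions prepended as a system message."""
    return [{"role": "system", "content": instructions}] + messages

msgs = with_custom_instructions(
    "Get straight to code. Put explanations at the end. "
    "Always give complete functions; never use placeholders.",
    [{"role": "user", "content": "Write a function that reverses a string."}],
)
# `msgs` can then be passed as the `messages` argument of a chat completion request.
```

Not elegant, but it keeps the instructions in front of the model on every request.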
@vb @willwearing Great news-- we’re back, baby! Custom Instructions working and GPT-4 now working like normal again over here!
@adopuspro seeing the same!
Nice, glad it’s working for y’all. Welcome to the forums as well; keep up the communication!
If you are using GPTs as well as custom instructions, it could get complicated. I think custom instructions are prioritized, but things are moving quickly, so that may not be the case anymore. Will look it up and edit as I find out.
They don’t need to fix it. But he wants people to use it more efficiently. What you did before was something that should have been done long ago.
Are your custom instructions still working?
I have an assistant where I want it to reference 5 questions from a very simple file.
I clearly tell it to only ask questions from the file.
50% of the conversations it asks a question that is not in the file.
I am super frustrated!
Do you mean the results from a custom GPT? And do you use some safety prompt with it?
Sorry, I guess wrong thread here
I was meaning Assistants API / Playground.
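One workaround that may help with the Assistants API is to inline the questions into the assistant’s instructions string, rather than relying on the model to retrieve them from the attached file. A minimal sketch; the file name and the one-question-per-line format are assumptions about your setup:

```python
# Workaround sketch: bake the allowed questions directly into the
# instructions string instead of relying on file retrieval.

# Create a tiny stand-in for the real question file.
with open("questions.txt", "w") as f:
    f.write("What is your name?\nWhat is your goal?\n")

def build_instructions(question_file: str) -> str:
    """Read one question per line and embed them, numbered, in the instructions."""
    with open(question_file) as f:
        questions = [line.strip() for line in f if line.strip()]
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return (
        "Ask the user ONLY the questions listed below, verbatim, one at a time. "
        "Do not invent new questions.\n\n" + numbered
    )

instructions = build_instructions("questions.txt")
# Pass `instructions` when creating the assistant in the Playground or via the API.
```

Since the questions are then part of the prompt itself, there is no retrieval step for the model to skip.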
Unfortunately, not working anymore for me. Burning through my allowed 40 messages without any of them really working!
This sounds the same as the general GPT-4 inability to follow coding instructions anymore. You used to get full output; now it summarizes, and it often makes mistakes or leaves out core parts of the code required for it to run. It’s as if our ability to control it has been dumbed down, and those custom instructions must be affected too. You could previously instruct it in the prompt to get full code output; now that no longer works. It’s obvious if you do any coding with it beyond a few functions: it fails to work the same now.
Perhaps the coup, and the fear of it becoming dangerous, has to do with the dumbing down and the removal of our ability to instruct it in detail? Hmmm…
Bingo, you’re describing exactly what we’re seeing over here!
Same, instructions (old instructions or new GPTs instructions) now work badly for me - GPT often ignores them, especially for large dialogues. Also, when the instructions are working, they give different (subjectively worse) results than before the update. So, I think many of the commercial bots are now broken because their behavior has changed and now needs more instructions, CAPS, and other crutches to try to achieve the previous results.
Sounds like it’s a problem they are aware of now. Glad there is recognition, instead of the feeling that we are imagining it. Reddit seems to be full of people experiencing this now; some of us are just the canaries in the coal mine, it seems.