I mean, everyone must have noticed it: when you have custom instructions, they get ignored after a couple of messages exchanged with the model.
But aren't they the most important thing in the conversation? If the chat doesn't follow them completely, it's extremely depressing.
How about doing it like this:
system prompt: you are a bot
user: hey what is your name?
custom instructions: - answer the user's question and don't forget that your name is bob
bot: bob
next request:
system prompt: you are a bot
user: hey what is your name?
bot: bob
user: nice to meet you bob, my name is ralph - please make a list of all sorts of candy that exist
custom instructions: - answer the user's question and don't forget that your name is bob
bot: [this nearly exceeds the token limit]
next request:
system prompt: you are a bot
user: hey what is your name?
bot: bob
user: nice to meet you bob, my name is ralph - please make a list of all sorts of candy that exist
bot: [this nearly exceeds the token limit]
user: cool now name the ones that have sugar in it and rename them to your name
custom instructions: - answer the user's question and don't forget that your name is bob
…
I mean, yes, it puts the current chat in the middle, between the system prompt and the custom instructions, but it gives us a chance to solve it…
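If you call the API yourself, this re-injection trick is only a few lines. A minimal sketch with the official openai Python SDK; the model name, the instruction text, and the choice to send the instructions as a trailing system message are my own assumptions, not anything ChatGPT does today:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CUSTOM_INSTRUCTIONS = (
    "- answer the user's question and don't forget that your name is bob"
)

# Growing chat history, starting with the usual system prompt.
history = [{"role": "system", "content": "you are a bot"}]

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # Re-inject the custom instructions as the *last* message of every
    # request, so they are always the freshest thing in the context.
    messages = history + [{"role": "system", "content": CUSTOM_INSTRUCTIONS}]
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("hey what is your name?"))  # -> "bob"
```

Note the instructions go into the outgoing request only, not into the stored history, so they never drift toward the middle as the chat grows.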
Maybe even a button, “inject custom instructions”, or a list of them so we can just click one. Or at least memory, which also doesn't work, could be used like that… Let's say I tell it “memorize that your name is bob” and “I always want all letters to be capitalized”,
and before the output is generated, it shows the memories and lets us select which ones to apply to the answer…
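As a toy sketch of that selection step, assuming memories are just stored strings and a console prompt stands in for the button:

```python
# Hypothetical: stored "memories" are plain strings, and the user picks
# which ones to apply before the answer is generated.
memories = [
    "your name is bob",
    "I always want all letters to be capitalized",
]

def select_memories() -> list[str]:
    for i, memory in enumerate(memories):
        print(f"[{i}] {memory}")
    raw = input("apply which memories? (e.g. 0,1 or blank for none): ")
    picks = [int(part) for part in raw.split(",") if part.strip()]
    return [memories[i] for i in picks]

def build_request(user_prompt: str) -> list[dict]:
    messages = [{"role": "user", "content": user_prompt}]
    selected = select_memories()
    if selected:
        # Append the chosen memories last, same trick as with the
        # custom instructions above.
        bullets = "\n".join(f"- {m}" for m in selected)
        messages.append({"role": "system", "content": bullets})
    return messages
```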
I usually give my custom instructions a descriptive name and refer to them by that name in the initial messages of the conversation until the model picks up on the pattern. If the model doesn't follow the instructions, I regenerate the message until it does.
I would suggest doing it just for that chat. Really, this is what should happen with the personalisation settings, but they're not checked before each response, as I'm always having to tell it to use British English (even though that is specified in the personalisation).
That aside, having the ability just for one chat would let you set specific instructions just for what you are working on. The key thing is for it to check the instructions each time before responding, to ensure nothing is missed.
When each turn of the conversation fills the complete context window, I start a new conversation every time.
At least that's what I expect will happen with a very large codebase.
Is it really like that, or are older messages just summarized? At least that's what it looks like to me.
And the first thing to get summarized away is the custom instructions.
Yes, if the conversation continues, OpenAI’s context management will be used. While this can be helpful, it also means that we are not fully in control.
Like building it ourselves, just to finish it at the same time they do?
Let's wait for DevDay and then make plans and extensions. Last year, DevDay meant plugins were dropped.
So who knows, maybe custom GPTs will be dropped this year.
An option in custom GPT actions that forces ChatGPT to always query an endpoint would be cool.
That could be used to inject the system prompt after the content/user prompt.
It would even make Assistants and memory unnecessary… a simple checkbox and three lines of code could make ChatGPT perfect.
Just add that and don’t touch it for the next 10 years.
Always send the most recent user prompt to the backend, and then the answer, and our own system automatically analyses or flags what is most important.
This, sending the user prompt to the backend, could even be a webhook.
(Charging the same as for the API for the webhook answers would be OK too, although I guess just making it work would save a lot anyway.)
I mean, this way a custom-instruction GPT or a memory GPT could be built by others.
OpenAI could even provide infrastructure for that or make a contest out of it.
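To be clear, no such webhook exists today, but the backend for one would be tiny. A Flask sketch under that assumption; the route, the payload fields, and the instruction store are all made up:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Hypothetical store of per-user instructions that our own system maintains.
INSTRUCTIONS = {"ralph": "- your name is bob\n- capitalize all letters"}

@app.post("/chatgpt-webhook")  # invented route; no such hook exists yet
def inject_instructions():
    payload = request.get_json(force=True)
    user = payload.get("user", "")           # invented field names
    prompt = payload.get("user_prompt", "")
    # This is where we would analyse or flag what is most important;
    # the sketch just logs the prompt.
    app.logger.info("prompt from %s: %s", user, prompt)
    # Return the instruction text ChatGPT would append before answering.
    return jsonify({"inject": INSTRUCTIONS.get(user, "")})

if __name__ == "__main__":
    app.run(port=8000)
```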