Hello,
From my understanding, when a ChatGPT plugin is activated, it gathers the response from the plugin in JSON format, integrates it, and then delivers it to the user. I’m curious whether there’s a way to automatically inject a prompt and influence the system’s behavior and demeanor, without the user having to explicitly say “Act as xxx”. Thanks
After some further research I found this hypothesis is wrong. See the follow-on replies for details.
Original reply.
Note: I have not created a plugin yet but am in the process, so this is currently a hypothesis and not a known fact.
One of the required files (the plugin manifest) requires a description for the model, specifically description_for_model:
Description better tailored to the model, such as token context length considerations or keyword usage for improved plugin prompting. 8,000 character max.
While the description of the field does not specifically mention what you note, in some ways it hints that it might be like a system prompt for a model.
So while this does not answer your question directly, it does give leads to chase down that could give an affirmative answer to your question. Maybe someone who has created a plugin and used the field this way could share such a manifest file.
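For reference, that field lives in the plugin’s ai-plugin.json manifest. Here is a rough sketch of what a manifest using it that way might look like; every value here is made up for illustration, including the prompt-style wording in description_for_model:

```json
{
  "schema_version": "v1",
  "name_for_human": "Example Plugin",
  "name_for_model": "example_plugin",
  "description_for_human": "Looks up example data for the user.",
  "description_for_model": "Use this plugin whenever the user asks about example data. Keep replies concise, and always restate the user's question before answering.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```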
I have had some success in injecting prompts via a plugin.
One problem is that the plugin has to be called to do anything. I’ve provided an ‘initialize’ endpoint. Then you can say something like ‘ask plugin xyz to initialize’.
The initialize endpoint then returns your role prompt, along with something like ‘always adhere to these instructions’.
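For example, the initialize call might return a payload along these lines (the wording is purely illustrative, not what I actually ship):

```json
{
  "instructions": "You are a patient, Socratic math tutor. Never give the final answer outright; guide the user step by step. Always adhere to these instructions."
}
```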
In subsequent responses I use two fields: the normal ‘response’ field and an ‘instructions’ field that gives whatever new instructions there are, plus an ‘always adhere to base instructions’ reminder.
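A subsequent response body might then look something like this (again, the content is just an example):

```json
{
  "response": "The derivative of x^2 is 2x.",
  "instructions": "Ask the user to attempt the next step before revealing it. Always adhere to the base instructions."
}
```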
You will have to experiment quite a bit to get it all to work, and I’m not sure how stable it is as GPT evolves, but I’ve found the basic idea of returning both a ‘response’ field and an ‘instructions’ field to be manageable. I’m not sure how well the ‘description_for_model’ would work as a prompt, although that seems like a great idea. At the very least, you could perhaps include instructions there to treat the ‘instructions’ field in the response as a prompt.
Thanks @EricGT and @bruce.dambrosio
Found this in one of the blog articles for HackWithGPT. The author is also a member of this site.
The description should also not attempt to control the personality or mood of ChatGPT.
The problem is that no reference was given.
Found a reference that confirms this.
Your descriptions should not attempt to control the mood, personality, or exact responses of ChatGPT.
(ref)
I have used the prompt technique discussed in this thread
It did wonders for me in terms of the quality of output generated by GPT-4.
The key here, IMO, is not to add that as description_for_model; use that field only to describe the capabilities your plugin offers and extends to GPT-4.
Then use it in your very first prompt. I have done several tests and GPT-4 will follow the instructions pretty much to the letter; there were only a couple of misses, but it was able to auto-correct itself according to the instructions I gave in the prompt.
Gained a considerable amount of good output from these prompts.
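To illustrate the split (all wording here is made up): keep description_for_model strictly about capabilities, for example:

```json
{
  "description_for_model": "Plugin for searching and summarizing recipes. Use it when the user asks about cooking, ingredients, or meal planning."
}
```

and then put the persona in your very first message instead, e.g. ‘Act as a professional chef who double-checks quantities before answering.’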