Context
The chat models accept a `messages` parameter instead of a `prompt` parameter. Currently, if you want the assistant's reply to start with a specific sequence, you have to say, in the previous message, something like: "Your response should start with `{thing:"` and should only be JSON5. It should not contain any text other than the JSON5, which doesn't quote property names." This works most of the time, but it is far from ideal or robust.
(Note: The above is just an example - please don’t focus on this specific use case, it’s just a simple example to illustrate the point.)
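The workaround above can be sketched as a request payload shaped like the chat completions API (the model name and wording are illustrative assumptions; no request is actually sent):

```python
# Sketch of the current instruction-based workaround: the desired prefix
# lives only inside the user message text, so the model is free to ignore it.
payload = {
    "model": "gpt-3.5-turbo",  # illustrative model choice
    "messages": [
        {
            "role": "user",
            "content": (
                "Generate some JSON5 data describing a thing. "
                'Your response should start with {thing:" and should only be '
                "JSON5, with unquoted property names."
            ),
        },
    ],
}

# The prefix is merely requested in prose, not enforced by the API.
assert '{thing:"' in payload["messages"][0]["content"]
```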
Feature Request
Add the ability to also specify a `prompt` parameter which is used to start off the assistant's message. For example, the user message would say "Generate some JSON5 data with …" and then you'd set the `prompt` parameter to `{thing:"` to start the assistant off on the right track.
So, currently: chat models accept `messages` instead of `prompt`.

Whereas, desired: chat models accept both `messages` and `prompt`.
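A minimal sketch of what the proposed request might look like. The `prompt` field alongside `messages` is exactly the hypothetical addition this issue requests; it does not exist in the current API:

```python
# Hypothetical payload shape for the requested feature: `prompt` seeds the
# beginning of the assistant's reply instead of asking for it in prose.
payload = {
    "model": "gpt-3.5-turbo",  # illustrative model choice
    "messages": [
        {"role": "user", "content": "Generate some JSON5 data with …"},
    ],
    # Proposed parameter (not real today): the assistant's message would be
    # guaranteed to continue from this exact prefix.
    "prompt": '{thing:"',
}

assert payload["prompt"] == '{thing:"'
```

With this shape, the prefix is enforced by the API rather than merely requested in the instruction text, which is the robustness gain the issue is after.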