I think the API for any model before GPT-4 should return a specific value or answer when the model is unable to answer a question (for example, about events after 2021, or other things it doesn't know). That way, devs would have more control over how to handle these cases, such as falling back to another resource for the wanted information.
It would be nice, but that would be tough to embed in the model itself. What you can do instead is give the agent instructions that stabilize its responses and make them more predictable. There is a lot of good experimentation going on around stabilizing input and output.
You can use step-by-step reasoning, few-shot examples, etc. There is a whole slew of techniques, but it is up to the user to implement them. Here is a list of techniques you can use to help stabilize your agents.
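One way to approximate the sentinel behavior from the original question is to instruct the model to emit a fixed token when it cannot answer, then branch on that token in your own code. This is a minimal sketch, not an official API feature: the system prompt wording, the `UNKNOWN` sentinel, and the `fallback_lookup` helper are all assumptions you would adapt to your setup.

```python
# Sketch: ask the model to emit a fixed sentinel ("UNKNOWN") when it
# cannot answer, then route that case to a fallback resource in code.
# The prompt text and helper names below are illustrative assumptions.

SYSTEM_PROMPT = (
    "Answer the user's question. If the answer depends on events after "
    "your training cutoff, or you do not know it, reply with exactly: UNKNOWN"
)

def fallback_lookup() -> str:
    # Placeholder: swap in a search API, database query, etc.
    return "[fetched from an external source]"

def handle_response(model_reply: str) -> str:
    """Route sentinel replies to a fallback instead of showing them to the user."""
    if model_reply.strip() == "UNKNOWN":
        return fallback_lookup()
    return model_reply
```

You would send `SYSTEM_PROMPT` along with the user's question, then pass the model's reply through `handle_response`; for example, `handle_response("UNKNOWN")` triggers the fallback, while `handle_response("Paris")` passes the answer through unchanged. The model may not always honor the sentinel instruction, which is why few-shot examples of the `UNKNOWN` case help stabilize it.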
Thanks Codie, I will check it out.