I’m wondering what people think of this developer problem - perhaps there is a design pattern for it…
I have a plugin endpoint that expects a string parameter. The value has to be recognized by the backend, i.e. it must be one of a long list of accepted values.
The LLM can make reasonable guesses here… but I also know for a fact that (just by the nature of the thing) it won’t get it right every time.
The list is too long to spell it out in the OpenAPI descriptor as an enum.
To give an example, my plugin is drawing historical maps of past kingdoms, empires, battles.
The name of the historical country is one of the input parameters.
Often the value is provided exactly as it is named in the database of historical data… and sometimes it is not.
What is the right thing to do here - in addition to telling the AI that the input parameter could not be interpreted?
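One option beyond a bare "could not be interpreted" error is to return close matches in the error payload, so the model can retry with a canonical name. A minimal sketch, assuming a Python backend; the country names here are just illustrative placeholders, and `difflib` from the standard library stands in for whatever fuzzy matcher you'd actually use:

```python
import difflib

# Hypothetical subset of the backend's accepted canonical names.
ACCEPTED_NAMES = [
    "Kingdom of Prussia",
    "Duchy of Prussia",
    "Austria-Hungary",
    "Ottoman Empire",
    "Holy Roman Empire",
]

def resolve_country(name: str) -> dict:
    """Return the canonical name, or an error payload with suggestions."""
    if name in ACCEPTED_NAMES:
        return {"ok": True, "name": name}
    # Offer the closest known names so the model can self-correct.
    suggestions = difflib.get_close_matches(name, ACCEPTED_NAMES, n=3, cutoff=0.5)
    return {
        "ok": False,
        "error": f"Unknown country '{name}'.",
        "suggestions": suggestions,
    }
```

With this, a near-miss like `resolve_country("Kingdom of Prusia")` fails but carries "Kingdom of Prussia" in `suggestions`, which is usually enough for the model to recover in one retry.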
In the case of my history subject, what I did is provide another endpoint where the AI can look up the historical countries that existed in the territory of a present-day country in a given year. (Without the year it would again be too slow, and the list too long.)
Maybe a version of this approach would work for other cases… i.e. let the AI look up the possible values based on an input parameter that is less likely to be misinterpreted.
Does anyone have a better idea?